00:00:00.001 Started by upstream project "autotest-per-patch" build number 126262 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.099 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.100 The recommended git tool is: git 00:00:00.100 using credential 00000000-0000-0000-0000-000000000002 00:00:00.102 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.122 Fetching changes from the remote Git repository 00:00:00.126 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.155 Using shallow fetch with depth 1 00:00:00.155 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.155 > git --version # timeout=10 00:00:00.173 > git --version # 'git version 2.39.2' 00:00:00.173 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.195 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.195 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.580 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.591 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.602 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD) 00:00:04.602 > git config core.sparsecheckout # timeout=10 00:00:04.613 > git read-tree -mu HEAD # timeout=10 00:00:04.628 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5 00:00:04.644 Commit message: "jenkins/jjb-config: Purge centos leftovers" 00:00:04.644 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10 00:00:04.711 [Pipeline] Start of Pipeline 00:00:04.727 [Pipeline] library 00:00:04.729 Loading library shm_lib@master 00:00:04.730 Library shm_lib@master is cached. Copying from home. 00:00:04.751 [Pipeline] node 00:00:04.759 Running on GP6 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:04.761 [Pipeline] { 00:00:04.771 [Pipeline] catchError 00:00:04.772 [Pipeline] { 00:00:04.784 [Pipeline] wrap 00:00:04.791 [Pipeline] { 00:00:04.798 [Pipeline] stage 00:00:04.800 [Pipeline] { (Prologue) 00:00:04.962 [Pipeline] sh 00:00:05.243 + logger -p user.info -t JENKINS-CI 00:00:05.258 [Pipeline] echo 00:00:05.260 Node: GP6 00:00:05.266 [Pipeline] sh 00:00:05.589 [Pipeline] setCustomBuildProperty 00:00:05.602 [Pipeline] echo 00:00:05.603 Cleanup processes 00:00:05.608 [Pipeline] sh 00:00:05.886 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.886 3939324 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.899 [Pipeline] sh 00:00:06.183 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.183 ++ grep -v 'sudo pgrep' 00:00:06.183 ++ awk '{print $1}' 00:00:06.183 + sudo kill -9 00:00:06.183 + true 00:00:06.197 [Pipeline] cleanWs 00:00:06.206 [WS-CLEANUP] Deleting project workspace... 00:00:06.206 [WS-CLEANUP] Deferred wipeout is used... 
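The "Cleanup processes" step above kills any autotest processes left over from a previous run in the same workspace. A minimal standalone sketch of that idiom, assuming the workspace path used in this job (the `grep -v` drops the pgrep invocation itself, and `|| true` keeps the step green when nothing matches, exactly as the trace shows):

    #!/usr/bin/env bash
    # Sketch of the stale-process cleanup traced above (hypothetical script;
    # the workspace path is taken from this log).
    WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest
    # List candidate PIDs, dropping the pgrep command itself from the output.
    pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
    # Kill leftovers from a previous build; tolerate an empty PID list.
    sudo kill -9 $pids || true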
00:00:06.212 [WS-CLEANUP] done 00:00:06.216 [Pipeline] setCustomBuildProperty 00:00:06.231 [Pipeline] sh 00:00:06.535 + sudo git config --global --replace-all safe.directory '*' 00:00:06.604 [Pipeline] httpRequest 00:00:06.636 [Pipeline] echo 00:00:06.637 Sorcerer 10.211.164.101 is alive 00:00:06.642 [Pipeline] httpRequest 00:00:06.646 HttpMethod: GET 00:00:06.647 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:06.648 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:06.669 Response Code: HTTP/1.1 200 OK 00:00:06.669 Success: Status code 200 is in the accepted range: 200,404 00:00:06.670 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:28.814 [Pipeline] sh 00:00:29.102 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:29.121 [Pipeline] httpRequest 00:00:29.149 [Pipeline] echo 00:00:29.151 Sorcerer 10.211.164.101 is alive 00:00:29.161 [Pipeline] httpRequest 00:00:29.166 HttpMethod: GET 00:00:29.167 URL: http://10.211.164.101/packages/spdk_fd0bbcfdd4a0ab65bdf7a5643cbaf7e38b0ff1ff.tar.gz 00:00:29.168 Sending request to url: http://10.211.164.101/packages/spdk_fd0bbcfdd4a0ab65bdf7a5643cbaf7e38b0ff1ff.tar.gz 00:00:29.180 Response Code: HTTP/1.1 200 OK 00:00:29.180 Success: Status code 200 is in the accepted range: 200,404 00:00:29.181 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_fd0bbcfdd4a0ab65bdf7a5643cbaf7e38b0ff1ff.tar.gz 00:01:03.572 [Pipeline] sh 00:01:03.859 + tar --no-same-owner -xf spdk_fd0bbcfdd4a0ab65bdf7a5643cbaf7e38b0ff1ff.tar.gz 00:01:07.150 [Pipeline] sh 00:01:07.440 + git -C spdk log --oneline -n5 00:01:07.440 fd0bbcfdd fio/nvme: use socket_id when allocating io buffers 00:01:07.440 8c20d24e0 spdk_nvme_perf: allocate buffers from socket_id reported by ctrlr 00:01:07.440 e9e51ebfe nvme/pcie: allocate cq from device-local numa node's memory 00:01:07.440 fcbf7f00f bdev/nvme: show `numa_socket_id` for bdev_nvme_get_controllers 00:01:07.440 47ca8c1aa nvme: populate socket_id for rdma controllers 00:01:07.452 [Pipeline] } 00:01:07.462 [Pipeline] // stage 00:01:07.468 [Pipeline] stage 00:01:07.469 [Pipeline] { (Prepare) 00:01:07.483 [Pipeline] writeFile 00:01:07.493 [Pipeline] sh 00:01:07.769 + logger -p user.info -t JENKINS-CI 00:01:07.779 [Pipeline] sh 00:01:08.056 + logger -p user.info -t JENKINS-CI 00:01:08.067 [Pipeline] sh 00:01:08.349 + cat autorun-spdk.conf 00:01:08.349 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:08.349 SPDK_TEST_NVMF=1 00:01:08.349 SPDK_TEST_NVME_CLI=1 00:01:08.349 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:08.349 SPDK_TEST_NVMF_NICS=e810 00:01:08.349 SPDK_TEST_VFIOUSER=1 00:01:08.349 SPDK_RUN_UBSAN=1 00:01:08.349 NET_TYPE=phy 00:01:08.357 RUN_NIGHTLY=0 00:01:08.361 [Pipeline] readFile 00:01:08.389 [Pipeline] withEnv 00:01:08.391 [Pipeline] { 00:01:08.405 [Pipeline] sh 00:01:08.690 + set -ex 00:01:08.690 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:08.690 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:08.690 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:08.690 ++ SPDK_TEST_NVMF=1 00:01:08.690 ++ SPDK_TEST_NVME_CLI=1 00:01:08.690 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:08.690 ++ SPDK_TEST_NVMF_NICS=e810 00:01:08.690 ++ SPDK_TEST_VFIOUSER=1 00:01:08.690 ++ SPDK_RUN_UBSAN=1 00:01:08.690 ++ NET_TYPE=phy 00:01:08.690 ++ RUN_NIGHTLY=0 00:01:08.690 + case $SPDK_TEST_NVMF_NICS in 
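The sourced autorun-spdk.conf drives the NIC preparation that the trace continues below: with SPDK_TEST_NVMF_NICS=e810 and a tcp transport, any RDMA provider modules are removed and the ice driver is probed. A minimal sketch of that idiom, under the assumption that the same module names apply (they are taken from the rmmod line in this log):

    # Sketch of the driver preparation traced below (module names from this log).
    DRIVERS=ice
    # Unload RDMA providers that could claim the NICs; a missing module is not an error.
    sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 || true
    for D in $DRIVERS; do
        sudo modprobe $D    # load the Intel E810 ethernet driver
    done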
00:01:08.690 + DRIVERS=ice 00:01:08.690 + [[ tcp == \r\d\m\a ]] 00:01:08.690 + [[ -n ice ]] 00:01:08.690 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:08.690 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:08.690 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:08.690 rmmod: ERROR: Module irdma is not currently loaded 00:01:08.690 rmmod: ERROR: Module i40iw is not currently loaded 00:01:08.690 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:08.690 + true 00:01:08.690 + for D in $DRIVERS 00:01:08.690 + sudo modprobe ice 00:01:08.690 + exit 0 00:01:08.705 [Pipeline] } 00:01:08.723 [Pipeline] // withEnv 00:01:08.728 [Pipeline] } 00:01:08.741 [Pipeline] // stage 00:01:08.750 [Pipeline] catchError 00:01:08.752 [Pipeline] { 00:01:08.765 [Pipeline] timeout 00:01:08.765 Timeout set to expire in 50 min 00:01:08.767 [Pipeline] { 00:01:08.779 [Pipeline] stage 00:01:08.780 [Pipeline] { (Tests) 00:01:08.793 [Pipeline] sh 00:01:09.077 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:09.077 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:09.077 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:09.077 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:09.077 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:09.077 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:09.077 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:09.077 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:09.077 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:09.077 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:09.077 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:09.077 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:09.077 + source /etc/os-release 00:01:09.077 ++ NAME='Fedora Linux' 00:01:09.077 ++ VERSION='38 (Cloud Edition)' 00:01:09.077 ++ ID=fedora 00:01:09.077 ++ VERSION_ID=38 00:01:09.077 ++ VERSION_CODENAME= 00:01:09.077 ++ PLATFORM_ID=platform:f38 00:01:09.077 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:09.077 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:09.077 ++ LOGO=fedora-logo-icon 00:01:09.077 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:09.077 ++ HOME_URL=https://fedoraproject.org/ 00:01:09.077 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:09.077 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:09.077 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:09.077 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:09.077 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:09.077 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:09.077 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:09.077 ++ SUPPORT_END=2024-05-14 00:01:09.077 ++ VARIANT='Cloud Edition' 00:01:09.077 ++ VARIANT_ID=cloud 00:01:09.077 + uname -a 00:01:09.077 Linux spdk-gp-06 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:09.077 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:10.451 Hugepages 00:01:10.451 node hugesize free / total 00:01:10.451 node0 1048576kB 0 / 0 00:01:10.451 node0 2048kB 0 / 0 00:01:10.451 node1 1048576kB 0 / 0 00:01:10.451 node1 2048kB 0 / 0 00:01:10.451 00:01:10.451 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:10.451 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:10.451 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:10.451 I/OAT 
0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:10.451 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:10.451 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:10.451 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:10.451 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:10.451 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:10.451 NVMe 0000:0b:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:01:10.451 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:10.451 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:10.451 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:10.451 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:10.451 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:10.451 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:10.451 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:10.451 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:10.451 + rm -f /tmp/spdk-ld-path 00:01:10.451 + source autorun-spdk.conf 00:01:10.451 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:10.451 ++ SPDK_TEST_NVMF=1 00:01:10.451 ++ SPDK_TEST_NVME_CLI=1 00:01:10.451 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:10.451 ++ SPDK_TEST_NVMF_NICS=e810 00:01:10.451 ++ SPDK_TEST_VFIOUSER=1 00:01:10.451 ++ SPDK_RUN_UBSAN=1 00:01:10.451 ++ NET_TYPE=phy 00:01:10.451 ++ RUN_NIGHTLY=0 00:01:10.451 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:10.451 + [[ -n '' ]] 00:01:10.451 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:10.451 + for M in /var/spdk/build-*-manifest.txt 00:01:10.451 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:10.451 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:10.451 + for M in /var/spdk/build-*-manifest.txt 00:01:10.451 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:10.451 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:10.451 ++ uname 00:01:10.451 + [[ Linux == \L\i\n\u\x ]] 00:01:10.451 + sudo dmesg -T 00:01:10.451 + sudo dmesg --clear 00:01:10.451 + dmesg_pid=3940623 00:01:10.451 + [[ Fedora Linux == FreeBSD ]] 00:01:10.451 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:10.451 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:10.451 + sudo dmesg -Tw 00:01:10.451 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:10.451 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:10.451 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:10.451 + [[ -x /usr/src/fio-static/fio ]] 00:01:10.451 + export FIO_BIN=/usr/src/fio-static/fio 00:01:10.451 + FIO_BIN=/usr/src/fio-static/fio 00:01:10.451 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:10.451 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:10.451 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:10.451 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:10.451 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:10.451 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:10.451 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:10.451 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:10.451 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:10.451 Test configuration: 00:01:10.451 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:10.451 SPDK_TEST_NVMF=1 00:01:10.451 SPDK_TEST_NVME_CLI=1 00:01:10.451 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:10.451 SPDK_TEST_NVMF_NICS=e810 00:01:10.451 SPDK_TEST_VFIOUSER=1 00:01:10.451 SPDK_RUN_UBSAN=1 00:01:10.451 NET_TYPE=phy 00:01:10.451 RUN_NIGHTLY=0 00:54:26 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:10.451 00:54:26 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:10.451 00:54:26 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:10.451 00:54:26 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:10.451 00:54:26 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:10.451 00:54:26 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:10.451 00:54:26 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:10.452 00:54:26 -- paths/export.sh@5 -- $ export PATH 00:01:10.452 00:54:26 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:10.452 00:54:26 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:10.452 00:54:26 -- common/autobuild_common.sh@444 -- $ date +%s 00:01:10.452 00:54:26 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721084066.XXXXXX 00:01:10.452 00:54:26 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721084066.fdfU2A 00:01:10.452 00:54:26 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:01:10.452 00:54:26 -- 
common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:01:10.452 00:54:26 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:10.452 00:54:26 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:10.452 00:54:26 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:10.452 00:54:26 -- common/autobuild_common.sh@460 -- $ get_config_params 00:01:10.452 00:54:26 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:10.452 00:54:26 -- common/autotest_common.sh@10 -- $ set +x 00:01:10.452 00:54:26 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:10.452 00:54:26 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:01:10.452 00:54:26 -- pm/common@17 -- $ local monitor 00:01:10.452 00:54:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:10.452 00:54:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:10.452 00:54:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:10.452 00:54:26 -- pm/common@21 -- $ date +%s 00:01:10.452 00:54:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:10.452 00:54:26 -- pm/common@21 -- $ date +%s 00:01:10.452 00:54:26 -- pm/common@25 -- $ sleep 1 00:01:10.452 00:54:26 -- pm/common@21 -- $ date +%s 00:01:10.452 00:54:26 -- pm/common@21 -- $ date +%s 00:01:10.452 00:54:26 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721084066 00:01:10.452 00:54:26 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721084066 00:01:10.452 00:54:26 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721084066 00:01:10.452 00:54:26 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721084066 00:01:10.452 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721084066_collect-vmstat.pm.log 00:01:10.452 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721084066_collect-cpu-load.pm.log 00:01:10.452 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721084066_collect-cpu-temp.pm.log 00:01:10.452 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721084066_collect-bmc-pm.bmc.pm.log 00:01:11.388 00:54:27 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:11.388 00:54:27 
-- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:11.388 00:54:27 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:11.388 00:54:27 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:11.388 00:54:27 -- spdk/autobuild.sh@16 -- $ date -u 00:01:11.388 Mon Jul 15 10:54:27 PM UTC 2024 00:01:11.388 00:54:27 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:11.388 v24.09-pre-237-gfd0bbcfdd 00:01:11.388 00:54:27 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:11.388 00:54:27 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:11.388 00:54:27 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:11.388 00:54:27 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:11.388 00:54:27 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:11.388 00:54:27 -- common/autotest_common.sh@10 -- $ set +x 00:01:11.388 ************************************ 00:01:11.388 START TEST ubsan 00:01:11.388 ************************************ 00:01:11.388 00:54:27 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:11.388 using ubsan 00:01:11.388 00:01:11.388 real 0m0.000s 00:01:11.388 user 0m0.000s 00:01:11.388 sys 0m0.000s 00:01:11.388 00:54:27 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:11.388 00:54:27 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:11.388 ************************************ 00:01:11.388 END TEST ubsan 00:01:11.388 ************************************ 00:01:11.646 00:54:27 -- common/autotest_common.sh@1142 -- $ return 0 00:01:11.646 00:54:27 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:11.646 00:54:27 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:11.646 00:54:27 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:11.646 00:54:27 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:11.646 00:54:27 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:11.646 00:54:27 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:11.646 00:54:27 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:11.646 00:54:27 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:11.646 00:54:27 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:11.646 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:11.646 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:11.905 Using 'verbs' RDMA provider 00:01:22.468 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:32.445 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:32.703 Creating mk/config.mk...done. 00:01:32.703 Creating mk/cc.flags.mk...done. 00:01:32.703 Type 'make' to build. 00:01:32.703 00:54:48 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:01:32.703 00:54:48 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:32.703 00:54:48 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:32.703 00:54:48 -- common/autotest_common.sh@10 -- $ set +x 00:01:32.703 ************************************ 00:01:32.703 START TEST make 00:01:32.703 ************************************ 00:01:32.703 00:54:48 make -- common/autotest_common.sh@1123 -- $ make -j48 00:01:32.966 make[1]: Nothing to be done for 'all'. 
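The `run_test` calls above wrap each test in START/END banners and a timing summary. A minimal sketch of such a wrapper follows; this is an illustration only, since the real helper in SPDK's autotest_common.sh also handles xtrace control and exit-code bookkeeping:

    # Minimal run_test-style wrapper (sketch; not the actual SPDK implementation).
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"           # run the test command; prints real/user/sys as seen above
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }

    run_test ubsan echo 'using ubsan'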
00:01:34.354 The Meson build system 00:01:34.354 Version: 1.3.1 00:01:34.354 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:34.354 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:34.354 Build type: native build 00:01:34.354 Project name: libvfio-user 00:01:34.354 Project version: 0.0.1 00:01:34.354 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:34.354 C linker for the host machine: cc ld.bfd 2.39-16 00:01:34.354 Host machine cpu family: x86_64 00:01:34.354 Host machine cpu: x86_64 00:01:34.354 Run-time dependency threads found: YES 00:01:34.354 Library dl found: YES 00:01:34.354 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:34.354 Run-time dependency json-c found: YES 0.17 00:01:34.354 Run-time dependency cmocka found: YES 1.1.7 00:01:34.354 Program pytest-3 found: NO 00:01:34.354 Program flake8 found: NO 00:01:34.354 Program misspell-fixer found: NO 00:01:34.354 Program restructuredtext-lint found: NO 00:01:34.354 Program valgrind found: YES (/usr/bin/valgrind) 00:01:34.354 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:34.354 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:34.354 Compiler for C supports arguments -Wwrite-strings: YES 00:01:34.354 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:34.354 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:34.354 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:34.354 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:34.354 Build targets in project: 8 00:01:34.354 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:34.354 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:34.354 00:01:34.354 libvfio-user 0.0.1 00:01:34.354 00:01:34.354 User defined options 00:01:34.354 buildtype : debug 00:01:34.354 default_library: shared 00:01:34.354 libdir : /usr/local/lib 00:01:34.354 00:01:34.354 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:35.310 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:35.310 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:35.310 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:35.572 [3/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:35.572 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:35.572 [5/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:35.572 [6/37] Compiling C object samples/null.p/null.c.o 00:01:35.572 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:35.572 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:35.572 [9/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:35.572 [10/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:35.572 [11/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:35.573 [12/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:35.573 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:35.573 [14/37] Compiling C object samples/server.p/server.c.o 00:01:35.573 [15/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:35.573 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:35.573 [17/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:35.573 [18/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:35.573 [19/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:35.573 [20/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:35.573 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:35.573 [22/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:35.573 [23/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:35.573 [24/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:35.573 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:35.573 [26/37] Compiling C object samples/client.p/client.c.o 00:01:35.834 [27/37] Linking target samples/client 00:01:35.834 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:35.834 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:35.834 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:01:35.834 [31/37] Linking target test/unit_tests 00:01:36.095 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:36.095 [33/37] Linking target samples/null 00:01:36.095 [34/37] Linking target samples/server 00:01:36.095 [35/37] Linking target samples/gpio-pci-idio-16 00:01:36.095 [36/37] Linking target samples/lspci 00:01:36.095 [37/37] Linking target samples/shadow_ioeventfd_server 00:01:36.095 INFO: autodetecting backend as ninja 00:01:36.095 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
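The libvfio-user subproject above is configured and compiled with meson/ninja, then staged under spdk/build via DESTDIR (the install command appears on the next line of the log). A condensed sketch of that sequence, assuming the same source and build directories and the buildtype/default_library values shown in the configure summary (in the real run these steps are driven by SPDK's build system, not invoked by hand):

    # Sketch of the libvfio-user build flow visible in this log (paths abbreviated).
    SRC=spdk/libvfio-user
    BUILD=spdk/build/libvfio-user/build-debug
    meson setup "$BUILD" "$SRC" --buildtype=debug -Ddefault_library=shared
    ninja -C "$BUILD"
    # Stage the install under spdk/build instead of the system /usr/local:
    DESTDIR=spdk/build/libvfio-user meson install --quiet -C "$BUILD"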
00:01:36.355 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:36.937 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:36.937 ninja: no work to do. 00:01:41.120 The Meson build system 00:01:41.120 Version: 1.3.1 00:01:41.120 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:41.120 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:41.120 Build type: native build 00:01:41.120 Program cat found: YES (/usr/bin/cat) 00:01:41.120 Project name: DPDK 00:01:41.120 Project version: 24.03.0 00:01:41.120 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:41.120 C linker for the host machine: cc ld.bfd 2.39-16 00:01:41.120 Host machine cpu family: x86_64 00:01:41.120 Host machine cpu: x86_64 00:01:41.120 Message: ## Building in Developer Mode ## 00:01:41.120 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:41.120 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:41.120 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:41.120 Program python3 found: YES (/usr/bin/python3) 00:01:41.120 Program cat found: YES (/usr/bin/cat) 00:01:41.120 Compiler for C supports arguments -march=native: YES 00:01:41.120 Checking for size of "void *" : 8 00:01:41.120 Checking for size of "void *" : 8 (cached) 00:01:41.120 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:41.120 Library m found: YES 00:01:41.120 Library numa found: YES 00:01:41.120 Has header "numaif.h" : YES 00:01:41.120 Library fdt found: NO 00:01:41.120 Library execinfo found: NO 00:01:41.120 Has header "execinfo.h" : YES 00:01:41.120 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:41.120 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:41.120 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:41.120 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:41.120 Run-time dependency openssl found: YES 3.0.9 00:01:41.120 Run-time dependency libpcap found: YES 1.10.4 00:01:41.120 Has header "pcap.h" with dependency libpcap: YES 00:01:41.120 Compiler for C supports arguments -Wcast-qual: YES 00:01:41.120 Compiler for C supports arguments -Wdeprecated: YES 00:01:41.120 Compiler for C supports arguments -Wformat: YES 00:01:41.120 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:41.120 Compiler for C supports arguments -Wformat-security: NO 00:01:41.121 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:41.121 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:41.121 Compiler for C supports arguments -Wnested-externs: YES 00:01:41.121 Compiler for C supports arguments -Wold-style-definition: YES 00:01:41.121 Compiler for C supports arguments -Wpointer-arith: YES 00:01:41.121 Compiler for C supports arguments -Wsign-compare: YES 00:01:41.121 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:41.121 Compiler for C supports arguments -Wundef: YES 00:01:41.121 Compiler for C supports arguments -Wwrite-strings: YES 00:01:41.121 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:41.121 Compiler for C supports arguments -Wno-packed-not-aligned: 
YES 00:01:41.121 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:41.121 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:41.121 Program objdump found: YES (/usr/bin/objdump) 00:01:41.121 Compiler for C supports arguments -mavx512f: YES 00:01:41.121 Checking if "AVX512 checking" compiles: YES 00:01:41.121 Fetching value of define "__SSE4_2__" : 1 00:01:41.121 Fetching value of define "__AES__" : 1 00:01:41.121 Fetching value of define "__AVX__" : 1 00:01:41.121 Fetching value of define "__AVX2__" : (undefined) 00:01:41.121 Fetching value of define "__AVX512BW__" : (undefined) 00:01:41.121 Fetching value of define "__AVX512CD__" : (undefined) 00:01:41.121 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:41.121 Fetching value of define "__AVX512F__" : (undefined) 00:01:41.121 Fetching value of define "__AVX512VL__" : (undefined) 00:01:41.121 Fetching value of define "__PCLMUL__" : 1 00:01:41.121 Fetching value of define "__RDRND__" : 1 00:01:41.121 Fetching value of define "__RDSEED__" : (undefined) 00:01:41.121 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:41.121 Fetching value of define "__znver1__" : (undefined) 00:01:41.121 Fetching value of define "__znver2__" : (undefined) 00:01:41.121 Fetching value of define "__znver3__" : (undefined) 00:01:41.121 Fetching value of define "__znver4__" : (undefined) 00:01:41.121 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:41.121 Message: lib/log: Defining dependency "log" 00:01:41.121 Message: lib/kvargs: Defining dependency "kvargs" 00:01:41.121 Message: lib/telemetry: Defining dependency "telemetry" 00:01:41.121 Checking for function "getentropy" : NO 00:01:41.121 Message: lib/eal: Defining dependency "eal" 00:01:41.121 Message: lib/ring: Defining dependency "ring" 00:01:41.121 Message: lib/rcu: Defining dependency "rcu" 00:01:41.121 Message: lib/mempool: Defining dependency "mempool" 00:01:41.121 Message: lib/mbuf: Defining dependency "mbuf" 00:01:41.121 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:41.121 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:41.121 Compiler for C supports arguments -mpclmul: YES 00:01:41.121 Compiler for C supports arguments -maes: YES 00:01:41.121 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:41.121 Compiler for C supports arguments -mavx512bw: YES 00:01:41.121 Compiler for C supports arguments -mavx512dq: YES 00:01:41.121 Compiler for C supports arguments -mavx512vl: YES 00:01:41.121 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:41.121 Compiler for C supports arguments -mavx2: YES 00:01:41.121 Compiler for C supports arguments -mavx: YES 00:01:41.121 Message: lib/net: Defining dependency "net" 00:01:41.121 Message: lib/meter: Defining dependency "meter" 00:01:41.121 Message: lib/ethdev: Defining dependency "ethdev" 00:01:41.121 Message: lib/pci: Defining dependency "pci" 00:01:41.121 Message: lib/cmdline: Defining dependency "cmdline" 00:01:41.121 Message: lib/hash: Defining dependency "hash" 00:01:41.121 Message: lib/timer: Defining dependency "timer" 00:01:41.121 Message: lib/compressdev: Defining dependency "compressdev" 00:01:41.121 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:41.121 Message: lib/dmadev: Defining dependency "dmadev" 00:01:41.121 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:41.121 Message: lib/power: Defining dependency "power" 00:01:41.121 Message: lib/reorder: Defining dependency "reorder" 00:01:41.121 
Message: lib/security: Defining dependency "security" 00:01:41.121 Has header "linux/userfaultfd.h" : YES 00:01:41.121 Has header "linux/vduse.h" : YES 00:01:41.121 Message: lib/vhost: Defining dependency "vhost" 00:01:41.121 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:41.121 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:41.121 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:41.121 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:41.121 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:41.121 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:41.121 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:41.121 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:41.121 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:41.121 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:41.121 Program doxygen found: YES (/usr/bin/doxygen) 00:01:41.121 Configuring doxy-api-html.conf using configuration 00:01:41.121 Configuring doxy-api-man.conf using configuration 00:01:41.121 Program mandb found: YES (/usr/bin/mandb) 00:01:41.121 Program sphinx-build found: NO 00:01:41.121 Configuring rte_build_config.h using configuration 00:01:41.121 Message: 00:01:41.121 ================= 00:01:41.121 Applications Enabled 00:01:41.121 ================= 00:01:41.121 00:01:41.121 apps: 00:01:41.121 00:01:41.121 00:01:41.121 Message: 00:01:41.121 ================= 00:01:41.121 Libraries Enabled 00:01:41.121 ================= 00:01:41.121 00:01:41.121 libs: 00:01:41.121 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:41.121 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:41.121 cryptodev, dmadev, power, reorder, security, vhost, 00:01:41.121 00:01:41.121 Message: 00:01:41.121 =============== 00:01:41.121 Drivers Enabled 00:01:41.121 =============== 00:01:41.121 00:01:41.121 common: 00:01:41.121 00:01:41.121 bus: 00:01:41.121 pci, vdev, 00:01:41.121 mempool: 00:01:41.121 ring, 00:01:41.121 dma: 00:01:41.121 00:01:41.121 net: 00:01:41.121 00:01:41.121 crypto: 00:01:41.121 00:01:41.121 compress: 00:01:41.121 00:01:41.121 vdpa: 00:01:41.121 00:01:41.121 00:01:41.121 Message: 00:01:41.121 ================= 00:01:41.121 Content Skipped 00:01:41.121 ================= 00:01:41.121 00:01:41.121 apps: 00:01:41.121 dumpcap: explicitly disabled via build config 00:01:41.121 graph: explicitly disabled via build config 00:01:41.121 pdump: explicitly disabled via build config 00:01:41.121 proc-info: explicitly disabled via build config 00:01:41.121 test-acl: explicitly disabled via build config 00:01:41.121 test-bbdev: explicitly disabled via build config 00:01:41.121 test-cmdline: explicitly disabled via build config 00:01:41.121 test-compress-perf: explicitly disabled via build config 00:01:41.121 test-crypto-perf: explicitly disabled via build config 00:01:41.121 test-dma-perf: explicitly disabled via build config 00:01:41.121 test-eventdev: explicitly disabled via build config 00:01:41.121 test-fib: explicitly disabled via build config 00:01:41.121 test-flow-perf: explicitly disabled via build config 00:01:41.121 test-gpudev: explicitly disabled via build config 00:01:41.121 test-mldev: explicitly disabled via build config 00:01:41.121 test-pipeline: explicitly disabled via build config 00:01:41.121 test-pmd: explicitly disabled via build config 
00:01:41.121 test-regex: explicitly disabled via build config 00:01:41.121 test-sad: explicitly disabled via build config 00:01:41.121 test-security-perf: explicitly disabled via build config 00:01:41.121 00:01:41.121 libs: 00:01:41.121 argparse: explicitly disabled via build config 00:01:41.121 metrics: explicitly disabled via build config 00:01:41.121 acl: explicitly disabled via build config 00:01:41.121 bbdev: explicitly disabled via build config 00:01:41.121 bitratestats: explicitly disabled via build config 00:01:41.121 bpf: explicitly disabled via build config 00:01:41.121 cfgfile: explicitly disabled via build config 00:01:41.121 distributor: explicitly disabled via build config 00:01:41.121 efd: explicitly disabled via build config 00:01:41.121 eventdev: explicitly disabled via build config 00:01:41.121 dispatcher: explicitly disabled via build config 00:01:41.121 gpudev: explicitly disabled via build config 00:01:41.121 gro: explicitly disabled via build config 00:01:41.121 gso: explicitly disabled via build config 00:01:41.121 ip_frag: explicitly disabled via build config 00:01:41.121 jobstats: explicitly disabled via build config 00:01:41.121 latencystats: explicitly disabled via build config 00:01:41.121 lpm: explicitly disabled via build config 00:01:41.121 member: explicitly disabled via build config 00:01:41.121 pcapng: explicitly disabled via build config 00:01:41.121 rawdev: explicitly disabled via build config 00:01:41.121 regexdev: explicitly disabled via build config 00:01:41.121 mldev: explicitly disabled via build config 00:01:41.121 rib: explicitly disabled via build config 00:01:41.121 sched: explicitly disabled via build config 00:01:41.121 stack: explicitly disabled via build config 00:01:41.121 ipsec: explicitly disabled via build config 00:01:41.121 pdcp: explicitly disabled via build config 00:01:41.121 fib: explicitly disabled via build config 00:01:41.121 port: explicitly disabled via build config 00:01:41.121 pdump: explicitly disabled via build config 00:01:41.121 table: explicitly disabled via build config 00:01:41.121 pipeline: explicitly disabled via build config 00:01:41.121 graph: explicitly disabled via build config 00:01:41.121 node: explicitly disabled via build config 00:01:41.121 00:01:41.121 drivers: 00:01:41.121 common/cpt: not in enabled drivers build config 00:01:41.121 common/dpaax: not in enabled drivers build config 00:01:41.121 common/iavf: not in enabled drivers build config 00:01:41.121 common/idpf: not in enabled drivers build config 00:01:41.121 common/ionic: not in enabled drivers build config 00:01:41.121 common/mvep: not in enabled drivers build config 00:01:41.121 common/octeontx: not in enabled drivers build config 00:01:41.121 bus/auxiliary: not in enabled drivers build config 00:01:41.121 bus/cdx: not in enabled drivers build config 00:01:41.121 bus/dpaa: not in enabled drivers build config 00:01:41.121 bus/fslmc: not in enabled drivers build config 00:01:41.121 bus/ifpga: not in enabled drivers build config 00:01:41.121 bus/platform: not in enabled drivers build config 00:01:41.121 bus/uacce: not in enabled drivers build config 00:01:41.121 bus/vmbus: not in enabled drivers build config 00:01:41.121 common/cnxk: not in enabled drivers build config 00:01:41.121 common/mlx5: not in enabled drivers build config 00:01:41.121 common/nfp: not in enabled drivers build config 00:01:41.121 common/nitrox: not in enabled drivers build config 00:01:41.121 common/qat: not in enabled drivers build config 00:01:41.121 common/sfc_efx: not in 
enabled drivers build config 00:01:41.122 mempool/bucket: not in enabled drivers build config 00:01:41.122 mempool/cnxk: not in enabled drivers build config 00:01:41.122 mempool/dpaa: not in enabled drivers build config 00:01:41.122 mempool/dpaa2: not in enabled drivers build config 00:01:41.122 mempool/octeontx: not in enabled drivers build config 00:01:41.122 mempool/stack: not in enabled drivers build config 00:01:41.122 dma/cnxk: not in enabled drivers build config 00:01:41.122 dma/dpaa: not in enabled drivers build config 00:01:41.122 dma/dpaa2: not in enabled drivers build config 00:01:41.122 dma/hisilicon: not in enabled drivers build config 00:01:41.122 dma/idxd: not in enabled drivers build config 00:01:41.122 dma/ioat: not in enabled drivers build config 00:01:41.122 dma/skeleton: not in enabled drivers build config 00:01:41.122 net/af_packet: not in enabled drivers build config 00:01:41.122 net/af_xdp: not in enabled drivers build config 00:01:41.122 net/ark: not in enabled drivers build config 00:01:41.122 net/atlantic: not in enabled drivers build config 00:01:41.122 net/avp: not in enabled drivers build config 00:01:41.122 net/axgbe: not in enabled drivers build config 00:01:41.122 net/bnx2x: not in enabled drivers build config 00:01:41.122 net/bnxt: not in enabled drivers build config 00:01:41.122 net/bonding: not in enabled drivers build config 00:01:41.122 net/cnxk: not in enabled drivers build config 00:01:41.122 net/cpfl: not in enabled drivers build config 00:01:41.122 net/cxgbe: not in enabled drivers build config 00:01:41.122 net/dpaa: not in enabled drivers build config 00:01:41.122 net/dpaa2: not in enabled drivers build config 00:01:41.122 net/e1000: not in enabled drivers build config 00:01:41.122 net/ena: not in enabled drivers build config 00:01:41.122 net/enetc: not in enabled drivers build config 00:01:41.122 net/enetfec: not in enabled drivers build config 00:01:41.122 net/enic: not in enabled drivers build config 00:01:41.122 net/failsafe: not in enabled drivers build config 00:01:41.122 net/fm10k: not in enabled drivers build config 00:01:41.122 net/gve: not in enabled drivers build config 00:01:41.122 net/hinic: not in enabled drivers build config 00:01:41.122 net/hns3: not in enabled drivers build config 00:01:41.122 net/i40e: not in enabled drivers build config 00:01:41.122 net/iavf: not in enabled drivers build config 00:01:41.122 net/ice: not in enabled drivers build config 00:01:41.122 net/idpf: not in enabled drivers build config 00:01:41.122 net/igc: not in enabled drivers build config 00:01:41.122 net/ionic: not in enabled drivers build config 00:01:41.122 net/ipn3ke: not in enabled drivers build config 00:01:41.122 net/ixgbe: not in enabled drivers build config 00:01:41.122 net/mana: not in enabled drivers build config 00:01:41.122 net/memif: not in enabled drivers build config 00:01:41.122 net/mlx4: not in enabled drivers build config 00:01:41.122 net/mlx5: not in enabled drivers build config 00:01:41.122 net/mvneta: not in enabled drivers build config 00:01:41.122 net/mvpp2: not in enabled drivers build config 00:01:41.122 net/netvsc: not in enabled drivers build config 00:01:41.122 net/nfb: not in enabled drivers build config 00:01:41.122 net/nfp: not in enabled drivers build config 00:01:41.122 net/ngbe: not in enabled drivers build config 00:01:41.122 net/null: not in enabled drivers build config 00:01:41.122 net/octeontx: not in enabled drivers build config 00:01:41.122 net/octeon_ep: not in enabled drivers build config 00:01:41.122 
net/pcap: not in enabled drivers build config 00:01:41.122 net/pfe: not in enabled drivers build config 00:01:41.122 net/qede: not in enabled drivers build config 00:01:41.122 net/ring: not in enabled drivers build config 00:01:41.122 net/sfc: not in enabled drivers build config 00:01:41.122 net/softnic: not in enabled drivers build config 00:01:41.122 net/tap: not in enabled drivers build config 00:01:41.122 net/thunderx: not in enabled drivers build config 00:01:41.122 net/txgbe: not in enabled drivers build config 00:01:41.122 net/vdev_netvsc: not in enabled drivers build config 00:01:41.122 net/vhost: not in enabled drivers build config 00:01:41.122 net/virtio: not in enabled drivers build config 00:01:41.122 net/vmxnet3: not in enabled drivers build config 00:01:41.122 raw/*: missing internal dependency, "rawdev" 00:01:41.122 crypto/armv8: not in enabled drivers build config 00:01:41.122 crypto/bcmfs: not in enabled drivers build config 00:01:41.122 crypto/caam_jr: not in enabled drivers build config 00:01:41.122 crypto/ccp: not in enabled drivers build config 00:01:41.122 crypto/cnxk: not in enabled drivers build config 00:01:41.122 crypto/dpaa_sec: not in enabled drivers build config 00:01:41.122 crypto/dpaa2_sec: not in enabled drivers build config 00:01:41.122 crypto/ipsec_mb: not in enabled drivers build config 00:01:41.122 crypto/mlx5: not in enabled drivers build config 00:01:41.122 crypto/mvsam: not in enabled drivers build config 00:01:41.122 crypto/nitrox: not in enabled drivers build config 00:01:41.122 crypto/null: not in enabled drivers build config 00:01:41.122 crypto/octeontx: not in enabled drivers build config 00:01:41.122 crypto/openssl: not in enabled drivers build config 00:01:41.122 crypto/scheduler: not in enabled drivers build config 00:01:41.122 crypto/uadk: not in enabled drivers build config 00:01:41.122 crypto/virtio: not in enabled drivers build config 00:01:41.122 compress/isal: not in enabled drivers build config 00:01:41.122 compress/mlx5: not in enabled drivers build config 00:01:41.122 compress/nitrox: not in enabled drivers build config 00:01:41.122 compress/octeontx: not in enabled drivers build config 00:01:41.122 compress/zlib: not in enabled drivers build config 00:01:41.122 regex/*: missing internal dependency, "regexdev" 00:01:41.122 ml/*: missing internal dependency, "mldev" 00:01:41.122 vdpa/ifc: not in enabled drivers build config 00:01:41.122 vdpa/mlx5: not in enabled drivers build config 00:01:41.122 vdpa/nfp: not in enabled drivers build config 00:01:41.122 vdpa/sfc: not in enabled drivers build config 00:01:41.122 event/*: missing internal dependency, "eventdev" 00:01:41.122 baseband/*: missing internal dependency, "bbdev" 00:01:41.122 gpu/*: missing internal dependency, "gpudev" 00:01:41.122 00:01:41.122 00:01:41.380 Build targets in project: 85 00:01:41.380 00:01:41.380 DPDK 24.03.0 00:01:41.380 00:01:41.380 User defined options 00:01:41.380 buildtype : debug 00:01:41.380 default_library : shared 00:01:41.380 libdir : lib 00:01:41.380 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:41.381 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:41.381 c_link_args : 00:01:41.381 cpu_instruction_set: native 00:01:41.381 disable_apps : 
test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:01:41.381 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:01:41.381 enable_docs : false 00:01:41.381 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:41.381 enable_kmods : false 00:01:41.381 max_lcores : 128 00:01:41.381 tests : false 00:01:41.381 00:01:41.381 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:41.972 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:41.972 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:41.972 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:41.972 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:41.972 [4/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:41.972 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:41.972 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:41.972 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:41.972 [8/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:41.972 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:41.972 [10/268] Linking static target lib/librte_kvargs.a 00:01:41.972 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:41.972 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:41.972 [13/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:42.231 [14/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:42.231 [15/268] Linking static target lib/librte_log.a 00:01:42.231 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:42.895 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.895 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:42.896 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:42.896 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:42.896 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:42.896 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:42.896 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:42.896 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:42.896 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:42.896 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:42.896 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:42.896 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:42.896 [29/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:42.896 [30/268] Compiling C object 
lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:42.896 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:42.896 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:42.896 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:42.896 [34/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:42.896 [35/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:42.896 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:42.896 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:42.896 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:42.896 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:42.896 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:42.896 [41/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:42.896 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:42.896 [43/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:42.896 [44/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:42.896 [45/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:42.896 [46/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:42.896 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:42.896 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:43.167 [49/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:43.167 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:43.167 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:43.167 [52/268] Linking static target lib/librte_telemetry.a 00:01:43.167 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:43.167 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:43.167 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:43.167 [56/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:43.167 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:43.167 [58/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:43.167 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:43.167 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:43.167 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:43.167 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:43.167 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:43.167 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:43.167 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:43.425 [66/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:43.425 [67/268] Linking static target lib/librte_pci.a 00:01:43.425 [68/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.689 [69/268] Linking target lib/librte_log.so.24.1 00:01:43.689 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 
00:01:43.689 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:01:43.689 [72/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:01:43.689 [73/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:01:43.689 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:01:43.689 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:01:43.959 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:01:43.959 [77/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:01:43.959 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:01:43.959 [79/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:01:43.959 [80/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:01:43.959 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:01:43.959 [82/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:01:43.959 [83/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:01:43.959 [84/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:01:43.959 [85/268] Linking target lib/librte_kvargs.so.24.1
00:01:43.959 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:01:43.959 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:01:43.959 [88/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:01:43.959 [89/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:01:43.959 [90/268] Linking static target lib/librte_ring.a
00:01:43.959 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:01:43.959 [92/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:01:43.959 [93/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:01:43.959 [94/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:01:43.959 [95/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:43.959 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:01:43.959 [97/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:01:43.959 [98/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:01:43.959 [99/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:01:43.959 [100/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:01:43.959 [101/268] Linking static target lib/librte_meter.a
00:01:43.959 [102/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:01:43.959 [103/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:01:43.959 [104/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:01:43.959 [105/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:01:43.959 [106/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:01:44.219 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:01:44.219 [108/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:01:44.219 [109/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:01:44.219 [110/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:01:44.219 [111/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:01:44.219 [112/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:01:44.219 [113/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:01:44.219 [114/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:01:44.219 [115/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:01:44.219 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:01:44.219 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:01:44.219 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:01:44.219 [119/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:01:44.219 [120/268] Linking static target lib/librte_mempool.a
00:01:44.219 [121/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:01:44.219 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:01:44.219 [123/268] Linking static target lib/librte_eal.a
00:01:44.219 [124/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:01:44.219 [125/268] Linking static target lib/librte_rcu.a
00:01:44.219 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:01:44.219 [127/268] Linking target lib/librte_telemetry.so.24.1
00:01:44.483 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:01:44.483 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:01:44.483 [130/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:01:44.483 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:01:44.483 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:01:44.483 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:01:44.483 [134/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:01:44.483 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:01:44.747 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:01:44.747 [137/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:01:44.747 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:01:44.747 [139/268] Linking static target lib/librte_net.a
00:01:44.747 [140/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:01:44.747 [141/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:01:45.006 [142/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:01:45.006 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:01:45.006 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:01:45.006 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:01:45.006 [146/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:01:45.006 [147/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:01:45.006 [148/268] Linking static target lib/librte_cmdline.a
00:01:45.006 [149/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:01:45.006 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:01:45.006 [151/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:01:45.006 [152/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:01:45.006 [153/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:01:45.006 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:01:45.006 [155/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:01:45.006 [156/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:01:45.006 [157/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:01:45.006 [158/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:01:45.266 [159/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:01:45.266 [160/268] Linking static target lib/librte_dmadev.a
00:01:45.266 [161/268] Linking static target lib/librte_timer.a
00:01:45.266 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:01:45.266 [163/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:01:45.266 [164/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:01:45.266 [165/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:01:45.266 [166/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:01:45.266 [167/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:01:45.266 [168/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:01:45.266 [169/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:01:45.525 [170/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:01:45.525 [171/268] Linking static target lib/librte_power.a
00:01:45.525 [172/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:01:45.525 [173/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:01:45.525 [174/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:01:45.525 [175/268] Linking static target lib/librte_compressdev.a
00:01:45.525 [176/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:01:45.525 [177/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:01:45.525 [178/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:01:45.525 [179/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:01:45.525 [180/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:01:45.525 [181/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:01:45.525 [182/268] Linking static target lib/librte_hash.a
00:01:45.525 [183/268] Linking static target lib/librte_reorder.a
00:01:45.525 [184/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:01:45.525 [185/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:01:45.525 [186/268] Linking static target lib/librte_mbuf.a
00:01:45.525 [187/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:01:45.525 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:01:45.525 [189/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:45.783 [190/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:01:45.783 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:01:45.783 [192/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:01:45.783 [193/268] Linking static target drivers/libtmp_rte_bus_vdev.a
00:01:45.783 [194/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:01:45.783 [195/268] Linking static target drivers/libtmp_rte_bus_pci.a
00:01:45.783 [196/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:01:45.783 [197/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:01:46.042 [198/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:01:46.042 [199/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:01:46.042 [200/268] Linking static target lib/librte_security.a
00:01:46.042 [201/268] Linking static target drivers/libtmp_rte_mempool_ring.a
00:01:46.042 [202/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:01:46.042 [203/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:46.042 [204/268] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:01:46.042 [205/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:01:46.042 [206/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:01:46.042 [207/268] Linking static target drivers/librte_bus_vdev.a
00:01:46.042 [208/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:01:46.042 [209/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:01:46.042 [210/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:01:46.042 [211/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:01:46.042 [212/268] Linking static target drivers/librte_bus_pci.a
00:01:46.042 [213/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:01:46.042 [214/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:01:46.042 [215/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:01:46.301 [216/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:01:46.301 [217/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:01:46.301 [218/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:01:46.301 [219/268] Linking static target drivers/librte_mempool_ring.a
00:01:46.301 [220/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:46.301 [221/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:01:46.301 [222/268] Linking static target lib/librte_ethdev.a
00:01:46.301 [223/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:01:46.301 [224/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:01:46.301 [225/268] Linking static target lib/librte_cryptodev.a
00:01:46.301 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:47.673 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:48.603 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:01:50.497 [229/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:50.497 [230/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:01:50.497 [231/268] Linking target lib/librte_eal.so.24.1
00:01:50.753 [232/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols
00:01:50.753 [233/268] Linking target lib/librte_meter.so.24.1
00:01:50.753 [234/268] Linking target lib/librte_pci.so.24.1
00:01:50.753 [235/268] Linking target lib/librte_dmadev.so.24.1
00:01:50.753 [236/268] Linking target lib/librte_ring.so.24.1
00:01:50.753 [237/268] Linking target lib/librte_timer.so.24.1
00:01:50.753 [238/268] Linking target drivers/librte_bus_vdev.so.24.1
00:01:50.753 [239/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols
00:01:50.753 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols
00:01:51.009 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols
00:01:51.009 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols
00:01:51.009 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols
00:01:51.009 [244/268] Linking target lib/librte_rcu.so.24.1
00:01:51.009 [245/268] Linking target drivers/librte_bus_pci.so.24.1
00:01:51.009 [246/268] Linking target lib/librte_mempool.so.24.1
00:01:51.009 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols
00:01:51.009 [248/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols
00:01:51.009 [249/268] Linking target drivers/librte_mempool_ring.so.24.1
00:01:51.009 [250/268] Linking target lib/librte_mbuf.so.24.1
00:01:51.266 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols
00:01:51.266 [252/268] Linking target lib/librte_compressdev.so.24.1
00:01:51.266 [253/268] Linking target lib/librte_net.so.24.1
00:01:51.266 [254/268] Linking target lib/librte_reorder.so.24.1
00:01:51.266 [255/268] Linking target lib/librte_cryptodev.so.24.1
00:01:51.266 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols
00:01:51.266 [257/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols
00:01:51.523 [258/268] Linking target lib/librte_security.so.24.1
00:01:51.523 [259/268] Linking target lib/librte_hash.so.24.1
00:01:51.523 [260/268] Linking target lib/librte_cmdline.so.24.1
00:01:51.523 [261/268] Linking target lib/librte_ethdev.so.24.1
00:01:51.523 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols
00:01:51.523 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols
00:01:51.523 [264/268] Linking target lib/librte_power.so.24.1
00:01:54.046 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:01:54.046 [266/268] Linking static target lib/librte_vhost.a
00:01:54.979 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:01:54.979 [268/268] Linking target lib/librte_vhost.so.24.1
00:01:54.979 INFO: autodetecting backend as ninja
00:01:54.979 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48
00:01:55.912 CC lib/ut_mock/mock.o
00:01:55.912 CC lib/log/log.o
00:01:55.912 CC lib/log/log_flags.o
00:01:55.912 CC lib/ut/ut.o
00:01:55.912 CC lib/log/log_deprecated.o
00:01:55.912 LIB libspdk_ut.a
00:01:55.912 LIB libspdk_log.a
00:01:55.912 LIB libspdk_ut_mock.a
00:01:56.171 SO libspdk_ut.so.2.0
00:01:56.171 SO libspdk_ut_mock.so.6.0
00:01:56.171 SO libspdk_log.so.7.0
00:01:56.171 SYMLINK libspdk_ut.so
00:01:56.171 SYMLINK libspdk_ut_mock.so
00:01:56.171 SYMLINK libspdk_log.so
00:01:56.171 CC lib/ioat/ioat.o
00:01:56.171 CXX lib/trace_parser/trace.o
00:01:56.171 CC lib/dma/dma.o
00:01:56.171 CC lib/util/bit_array.o
00:01:56.171 CC lib/util/base64.o
00:01:56.171 CC lib/util/cpuset.o
00:01:56.171 CC lib/util/crc16.o
00:01:56.171 CC lib/util/crc32.o
00:01:56.171 CC lib/util/crc32c.o
00:01:56.171 CC lib/util/crc32_ieee.o
00:01:56.171 CC lib/util/crc64.o
00:01:56.171 CC lib/util/dif.o
00:01:56.171 CC lib/util/fd.o
00:01:56.171 CC lib/util/fd_group.o
00:01:56.171 CC lib/util/file.o
00:01:56.171 CC lib/util/hexlify.o
00:01:56.171 CC lib/util/iov.o
00:01:56.171 CC lib/util/math.o
00:01:56.171 CC lib/util/net.o
00:01:56.171 CC lib/util/pipe.o
00:01:56.171 CC lib/util/strerror_tls.o
00:01:56.171 CC lib/util/string.o
00:01:56.171 CC lib/util/uuid.o
00:01:56.171 CC lib/util/xor.o
00:01:56.171 CC lib/util/zipf.o
00:01:56.429 CC lib/vfio_user/host/vfio_user_pci.o
00:01:56.429 CC lib/vfio_user/host/vfio_user.o
00:01:56.429 LIB libspdk_dma.a
00:01:56.429 SO libspdk_dma.so.4.0
00:01:56.686 SYMLINK libspdk_dma.so
00:01:56.686 LIB libspdk_vfio_user.a
00:01:56.686 LIB libspdk_ioat.a
00:01:56.686 SO libspdk_vfio_user.so.5.0
00:01:56.686 SO libspdk_ioat.so.7.0
00:01:56.686 SYMLINK libspdk_vfio_user.so
00:01:56.686 SYMLINK libspdk_ioat.so
00:01:56.949 LIB libspdk_util.a
00:01:56.949 SO libspdk_util.so.9.1
00:01:56.949 SYMLINK libspdk_util.so
00:01:57.208 CC lib/json/json_parse.o
00:01:57.208 CC lib/conf/conf.o
00:01:57.208 CC lib/idxd/idxd.o
00:01:57.208 CC lib/json/json_util.o
00:01:57.208 CC lib/idxd/idxd_user.o
00:01:57.208 CC lib/env_dpdk/env.o
00:01:57.208 CC lib/json/json_write.o
00:01:57.208 CC lib/idxd/idxd_kernel.o
00:01:57.208 CC lib/rdma_provider/rdma_provider_verbs.o
00:01:57.208 CC lib/rdma_provider/common.o
00:01:57.208 CC lib/vmd/vmd.o
00:01:57.208 CC lib/vmd/led.o
00:01:57.208 CC lib/rdma_utils/rdma_utils.o
00:01:57.208 CC lib/env_dpdk/memory.o
00:01:57.208 CC lib/env_dpdk/pci.o
00:01:57.208 CC lib/env_dpdk/init.o
00:01:57.208 CC lib/env_dpdk/threads.o
00:01:57.208 CC lib/env_dpdk/pci_ioat.o
00:01:57.208 CC lib/env_dpdk/pci_virtio.o
00:01:57.208 CC lib/env_dpdk/pci_vmd.o
00:01:57.208 CC lib/env_dpdk/pci_idxd.o
00:01:57.208 CC lib/env_dpdk/pci_event.o
00:01:57.208 CC lib/env_dpdk/sigbus_handler.o
00:01:57.208 CC lib/env_dpdk/pci_dpdk.o
00:01:57.208 CC lib/env_dpdk/pci_dpdk_2211.o
00:01:57.208 CC lib/env_dpdk/pci_dpdk_2207.o
00:01:57.208 LIB libspdk_trace_parser.a
00:01:57.208 SO libspdk_trace_parser.so.5.0
00:01:57.470 SYMLINK libspdk_trace_parser.so
00:01:57.470 LIB libspdk_rdma_provider.a
00:01:57.470 SO libspdk_rdma_provider.so.6.0
00:01:57.470 SYMLINK libspdk_rdma_provider.so
00:01:57.470 LIB libspdk_rdma_utils.a
00:01:57.470 LIB libspdk_json.a
00:01:57.470 SO libspdk_rdma_utils.so.1.0
00:01:57.470 SO libspdk_json.so.6.0
00:01:57.470 LIB libspdk_conf.a
00:01:57.731 SO libspdk_conf.so.6.0
00:01:57.731 SYMLINK libspdk_rdma_utils.so
00:01:57.731 SYMLINK libspdk_json.so
00:01:57.731 SYMLINK libspdk_conf.so
00:01:57.731 CC lib/jsonrpc/jsonrpc_server.o
00:01:57.731 CC lib/jsonrpc/jsonrpc_server_tcp.o
00:01:57.731 CC lib/jsonrpc/jsonrpc_client.o
00:01:57.731 CC lib/jsonrpc/jsonrpc_client_tcp.o
00:01:57.731 LIB libspdk_idxd.a
00:01:57.731 SO libspdk_idxd.so.12.0
00:01:57.989 SYMLINK libspdk_idxd.so
00:01:57.989 LIB libspdk_vmd.a
00:01:57.989 SO libspdk_vmd.so.6.0
00:01:57.989 SYMLINK libspdk_vmd.so
00:01:57.989 LIB libspdk_jsonrpc.a
00:01:57.989 SO libspdk_jsonrpc.so.6.0
00:01:58.246 SYMLINK libspdk_jsonrpc.so
00:01:58.246 CC lib/rpc/rpc.o
00:01:58.504 LIB libspdk_rpc.a
00:01:58.504 SO libspdk_rpc.so.6.0
00:01:58.504 SYMLINK libspdk_rpc.so
00:01:58.762 CC lib/trace/trace.o
00:01:58.762 CC lib/trace/trace_flags.o
00:01:58.762 CC lib/keyring/keyring.o
00:01:58.762 CC lib/notify/notify.o
00:01:58.762 CC lib/keyring/keyring_rpc.o
00:01:58.762 CC lib/trace/trace_rpc.o
00:01:58.762 CC lib/notify/notify_rpc.o
00:01:59.020 LIB libspdk_notify.a
00:01:59.020 SO libspdk_notify.so.6.0
00:01:59.020 LIB libspdk_keyring.a
00:01:59.020 SYMLINK libspdk_notify.so
00:01:59.020 LIB libspdk_trace.a
00:01:59.020 SO libspdk_keyring.so.1.0
00:01:59.020 SO libspdk_trace.so.10.0
00:01:59.020 SYMLINK libspdk_keyring.so
00:01:59.020 SYMLINK libspdk_trace.so
00:01:59.311 CC lib/thread/thread.o
00:01:59.311 CC lib/thread/iobuf.o
00:01:59.311 CC lib/sock/sock.o
00:01:59.311 CC lib/sock/sock_rpc.o
00:01:59.311 LIB libspdk_env_dpdk.a
00:01:59.311 SO libspdk_env_dpdk.so.15.0
00:01:59.569 SYMLINK libspdk_env_dpdk.so
00:01:59.569 LIB libspdk_sock.a
00:01:59.828 SO libspdk_sock.so.10.0
00:01:59.828 SYMLINK libspdk_sock.so
00:02:00.086 CC lib/nvme/nvme_ctrlr_cmd.o
00:02:00.086 CC lib/nvme/nvme_ctrlr.o
00:02:00.086 CC lib/nvme/nvme_fabric.o
00:02:00.086 CC lib/nvme/nvme_ns_cmd.o
00:02:00.086 CC lib/nvme/nvme_ns.o
00:02:00.086 CC lib/nvme/nvme_pcie_common.o
00:02:00.086 CC lib/nvme/nvme_pcie.o
00:02:00.086 CC lib/nvme/nvme_qpair.o
00:02:00.086 CC lib/nvme/nvme_quirks.o
00:02:00.086 CC lib/nvme/nvme.o
00:02:00.086 CC lib/nvme/nvme_transport.o
00:02:00.086 CC lib/nvme/nvme_discovery.o
00:02:00.086 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:02:00.086 CC lib/nvme/nvme_ns_ocssd_cmd.o
00:02:00.086 CC lib/nvme/nvme_tcp.o
00:02:00.086 CC lib/nvme/nvme_opal.o
00:02:00.086 CC lib/nvme/nvme_io_msg.o
00:02:00.086 CC lib/nvme/nvme_poll_group.o
00:02:00.086 CC lib/nvme/nvme_zns.o
00:02:00.087 CC lib/nvme/nvme_stubs.o
00:02:00.087 CC lib/nvme/nvme_auth.o
00:02:00.087 CC lib/nvme/nvme_cuse.o
00:02:00.087 CC lib/nvme/nvme_vfio_user.o
00:02:00.087 CC lib/nvme/nvme_rdma.o
00:02:01.021 LIB libspdk_thread.a
00:02:01.021 SO libspdk_thread.so.10.1
00:02:01.021 SYMLINK libspdk_thread.so
00:02:01.021 CC lib/vfu_tgt/tgt_endpoint.o
00:02:01.021 CC lib/init/json_config.o
00:02:01.021 CC lib/virtio/virtio.o
00:02:01.021 CC lib/vfu_tgt/tgt_rpc.o
00:02:01.021 CC lib/init/subsystem.o
00:02:01.021 CC lib/virtio/virtio_vhost_user.o
00:02:01.021 CC lib/accel/accel.o
00:02:01.021 CC lib/init/subsystem_rpc.o
00:02:01.021 CC lib/blob/blobstore.o
00:02:01.021 CC lib/virtio/virtio_vfio_user.o
00:02:01.021 CC lib/accel/accel_rpc.o
00:02:01.021 CC lib/init/rpc.o
00:02:01.021 CC lib/blob/request.o
00:02:01.021 CC lib/virtio/virtio_pci.o
00:02:01.021 CC lib/accel/accel_sw.o
00:02:01.021 CC lib/blob/zeroes.o
00:02:01.021 CC lib/blob/blob_bs_dev.o
00:02:01.588 LIB libspdk_init.a
00:02:01.588 SO libspdk_init.so.5.0
00:02:01.588 LIB libspdk_vfu_tgt.a
00:02:01.588 LIB libspdk_virtio.a
00:02:01.588 SYMLINK libspdk_init.so
00:02:01.588 SO libspdk_vfu_tgt.so.3.0
00:02:01.588 SO libspdk_virtio.so.7.0
00:02:01.588 SYMLINK libspdk_vfu_tgt.so
00:02:01.588 SYMLINK libspdk_virtio.so
00:02:01.588 CC lib/event/app.o
00:02:01.588 CC lib/event/reactor.o
00:02:01.588 CC lib/event/log_rpc.o
00:02:01.588 CC lib/event/app_rpc.o
00:02:01.588 CC lib/event/scheduler_static.o
00:02:02.154 LIB libspdk_event.a
00:02:02.154 SO libspdk_event.so.14.0
00:02:02.154 LIB libspdk_accel.a
00:02:02.154 SO libspdk_accel.so.15.1
00:02:02.154 SYMLINK libspdk_event.so
00:02:02.154 SYMLINK libspdk_accel.so
00:02:02.411 LIB libspdk_nvme.a
00:02:02.411 CC lib/bdev/bdev.o
00:02:02.411 CC lib/bdev/bdev_rpc.o
00:02:02.411 CC lib/bdev/bdev_zone.o
00:02:02.411 CC lib/bdev/part.o
00:02:02.411 CC lib/bdev/scsi_nvme.o
00:02:02.411 SO libspdk_nvme.so.13.1
00:02:02.669 SYMLINK libspdk_nvme.so
00:02:04.043 LIB libspdk_blob.a
00:02:04.043 SO libspdk_blob.so.11.0
00:02:04.300 SYMLINK libspdk_blob.so
00:02:04.300 CC lib/lvol/lvol.o
00:02:04.300 CC lib/blobfs/blobfs.o
00:02:04.300 CC lib/blobfs/tree.o
00:02:05.238 LIB libspdk_bdev.a
00:02:05.238 SO libspdk_bdev.so.15.1
00:02:05.238 SYMLINK libspdk_bdev.so
00:02:05.238 LIB libspdk_blobfs.a
00:02:05.238 SO libspdk_blobfs.so.10.0
00:02:05.238 SYMLINK libspdk_blobfs.so
00:02:05.238 CC lib/scsi/dev.o
00:02:05.238 CC lib/scsi/lun.o
00:02:05.238 CC lib/scsi/port.o
00:02:05.238 CC lib/nvmf/ctrlr.o
00:02:05.238 CC lib/nbd/nbd.o
00:02:05.238 CC lib/ublk/ublk.o
00:02:05.238 CC lib/ftl/ftl_core.o
00:02:05.238 CC lib/ublk/ublk_rpc.o
00:02:05.238 CC lib/nvmf/ctrlr_discovery.o
00:02:05.238 CC lib/ftl/ftl_init.o
00:02:05.238 CC lib/nbd/nbd_rpc.o
00:02:05.238 CC lib/nvmf/ctrlr_bdev.o
00:02:05.238 CC lib/scsi/scsi.o
00:02:05.238 CC lib/ftl/ftl_layout.o
00:02:05.238 CC lib/scsi/scsi_bdev.o
00:02:05.238 CC lib/nvmf/subsystem.o
00:02:05.238 CC lib/ftl/ftl_debug.o
00:02:05.238 CC lib/scsi/scsi_pr.o
00:02:05.238 CC lib/nvmf/nvmf.o
00:02:05.238 CC lib/ftl/ftl_io.o
00:02:05.238 CC lib/scsi/scsi_rpc.o
00:02:05.238 CC lib/nvmf/nvmf_rpc.o
00:02:05.238 CC lib/ftl/ftl_sb.o
00:02:05.238 CC lib/scsi/task.o
00:02:05.238 CC lib/ftl/ftl_l2p.o
00:02:05.238 CC lib/ftl/ftl_l2p_flat.o
00:02:05.238 CC lib/nvmf/transport.o
00:02:05.238 CC lib/nvmf/tcp.o
00:02:05.238 CC lib/nvmf/stubs.o
00:02:05.238 CC lib/ftl/ftl_nv_cache.o
00:02:05.238 CC lib/nvmf/mdns_server.o
00:02:05.238 CC lib/ftl/ftl_band.o
00:02:05.238 CC lib/nvmf/vfio_user.o
00:02:05.238 CC lib/ftl/ftl_band_ops.o
00:02:05.238 CC lib/ftl/ftl_writer.o
00:02:05.238 CC lib/nvmf/auth.o
00:02:05.238 CC lib/nvmf/rdma.o
00:02:05.238 CC lib/ftl/ftl_rq.o
00:02:05.238 CC lib/ftl/ftl_reloc.o
00:02:05.238 CC lib/ftl/ftl_l2p_cache.o
00:02:05.238 CC lib/ftl/ftl_p2l.o
00:02:05.238 CC lib/ftl/mngt/ftl_mngt.o
00:02:05.238 CC lib/ftl/mngt/ftl_mngt_bdev.o
00:02:05.238 CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:02:05.238 CC lib/ftl/mngt/ftl_mngt_startup.o
00:02:05.238 CC lib/ftl/mngt/ftl_mngt_md.o
00:02:05.238 CC lib/ftl/mngt/ftl_mngt_misc.o
00:02:05.499 LIB libspdk_lvol.a
00:02:05.499 SO libspdk_lvol.so.10.0
00:02:05.499 SYMLINK libspdk_lvol.so
00:02:05.499 CC lib/ftl/mngt/ftl_mngt_ioch.o
00:02:05.764 CC lib/ftl/mngt/ftl_mngt_l2p.o
00:02:05.764 CC lib/ftl/mngt/ftl_mngt_band.o
00:02:05.764 CC lib/ftl/mngt/ftl_mngt_self_test.o
00:02:05.764 CC lib/ftl/mngt/ftl_mngt_p2l.o
00:02:05.764 CC lib/ftl/mngt/ftl_mngt_recovery.o
00:02:05.764 CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:02:05.764 CC lib/ftl/utils/ftl_conf.o
00:02:05.764 CC lib/ftl/utils/ftl_md.o
00:02:05.764 CC lib/ftl/utils/ftl_mempool.o
00:02:05.764 CC lib/ftl/utils/ftl_bitmap.o
00:02:05.764 CC lib/ftl/utils/ftl_property.o
00:02:05.764 CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:02:05.764 CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:02:05.764 CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:02:05.764 CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:02:05.764 CC lib/ftl/upgrade/ftl_band_upgrade.o
00:02:05.764 CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:02:05.764 CC lib/ftl/upgrade/ftl_trim_upgrade.o
00:02:06.024 CC lib/ftl/upgrade/ftl_sb_v3.o
00:02:06.024 CC lib/ftl/upgrade/ftl_sb_v5.o
00:02:06.024 CC lib/ftl/nvc/ftl_nvc_dev.o
00:02:06.024 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:02:06.024 CC lib/ftl/base/ftl_base_dev.o
00:02:06.024 CC lib/ftl/base/ftl_base_bdev.o
00:02:06.025 CC lib/ftl/ftl_trace.o
00:02:06.025 LIB libspdk_nbd.a
00:02:06.025 SO libspdk_nbd.so.7.0
00:02:06.286 SYMLINK libspdk_nbd.so
00:02:06.286 LIB libspdk_scsi.a
00:02:06.286 SO libspdk_scsi.so.9.0
00:02:06.286 LIB libspdk_ublk.a
00:02:06.286 SO libspdk_ublk.so.3.0
00:02:06.286 SYMLINK libspdk_scsi.so
00:02:06.545 SYMLINK libspdk_ublk.so
00:02:06.545 CC lib/vhost/vhost.o
00:02:06.545 CC lib/vhost/vhost_rpc.o
00:02:06.545 CC lib/vhost/vhost_scsi.o
00:02:06.545 CC lib/vhost/vhost_blk.o
00:02:06.545 CC lib/vhost/rte_vhost_user.o
00:02:06.545 CC lib/iscsi/conn.o
00:02:06.545 CC lib/iscsi/init_grp.o
00:02:06.545 CC lib/iscsi/iscsi.o
00:02:06.545 CC lib/iscsi/md5.o
00:02:06.545 CC lib/iscsi/param.o
00:02:06.545 CC lib/iscsi/portal_grp.o
00:02:06.545 CC lib/iscsi/tgt_node.o
00:02:06.545 CC lib/iscsi/iscsi_subsystem.o
00:02:06.545 CC lib/iscsi/iscsi_rpc.o
00:02:06.545 CC lib/iscsi/task.o
00:02:06.825 LIB libspdk_ftl.a
00:02:06.825 SO libspdk_ftl.so.9.0
00:02:07.394 SYMLINK libspdk_ftl.so
00:02:07.652 LIB libspdk_vhost.a
00:02:07.909 SO libspdk_vhost.so.8.0
00:02:07.909 LIB libspdk_nvmf.a
00:02:07.909 SO libspdk_nvmf.so.19.0
00:02:07.909 SYMLINK libspdk_vhost.so
00:02:07.909 LIB libspdk_iscsi.a
00:02:08.167 SO libspdk_iscsi.so.8.0
00:02:08.167 SYMLINK libspdk_nvmf.so
00:02:08.167 SYMLINK libspdk_iscsi.so
00:02:08.424 CC module/env_dpdk/env_dpdk_rpc.o
00:02:08.424 CC module/vfu_device/vfu_virtio.o
00:02:08.424 CC module/vfu_device/vfu_virtio_blk.o
00:02:08.424 CC module/vfu_device/vfu_virtio_scsi.o
00:02:08.424 CC module/vfu_device/vfu_virtio_rpc.o
00:02:08.685 CC module/scheduler/dynamic/scheduler_dynamic.o
00:02:08.685 CC module/keyring/linux/keyring.o
00:02:08.685 CC module/accel/dsa/accel_dsa.o
00:02:08.685 CC module/accel/error/accel_error.o
00:02:08.685 CC module/accel/dsa/accel_dsa_rpc.o
00:02:08.685 CC module/blob/bdev/blob_bdev.o
00:02:08.685 CC module/keyring/linux/keyring_rpc.o
00:02:08.685 CC module/accel/error/accel_error_rpc.o
00:02:08.685 CC module/scheduler/dpdk_governor/dpdk_governor.o
00:02:08.685 CC module/scheduler/gscheduler/gscheduler.o
00:02:08.685 CC module/sock/posix/posix.o
00:02:08.685 CC module/keyring/file/keyring.o
00:02:08.685 CC module/accel/ioat/accel_ioat.o
00:02:08.685 CC module/accel/iaa/accel_iaa.o
00:02:08.685 CC module/keyring/file/keyring_rpc.o
00:02:08.685 CC module/accel/ioat/accel_ioat_rpc.o
00:02:08.685 CC module/accel/iaa/accel_iaa_rpc.o
00:02:08.685 LIB libspdk_env_dpdk_rpc.a
00:02:08.685 SO libspdk_env_dpdk_rpc.so.6.0
00:02:08.685 SYMLINK libspdk_env_dpdk_rpc.so
00:02:08.685 LIB libspdk_keyring_linux.a
00:02:08.685 LIB libspdk_keyring_file.a
00:02:08.685 LIB libspdk_scheduler_dpdk_governor.a
00:02:08.685 LIB libspdk_scheduler_gscheduler.a
00:02:08.685 SO libspdk_keyring_linux.so.1.0
00:02:08.685 SO libspdk_keyring_file.so.1.0
00:02:08.685 LIB libspdk_accel_error.a
00:02:08.685 SO libspdk_scheduler_gscheduler.so.4.0
00:02:08.685 SO libspdk_scheduler_dpdk_governor.so.4.0
00:02:08.685 LIB libspdk_accel_ioat.a
00:02:08.685 LIB libspdk_scheduler_dynamic.a
00:02:08.685 SO libspdk_accel_error.so.2.0
00:02:08.685 LIB libspdk_accel_iaa.a
00:02:08.943 SO libspdk_accel_ioat.so.6.0
00:02:08.943 SO libspdk_scheduler_dynamic.so.4.0
00:02:08.943 SYMLINK libspdk_keyring_linux.so
00:02:08.943 SYMLINK libspdk_scheduler_gscheduler.so
00:02:08.943 SYMLINK libspdk_keyring_file.so
00:02:08.943 SYMLINK libspdk_scheduler_dpdk_governor.so
00:02:08.943 SO libspdk_accel_iaa.so.3.0
00:02:08.943 SYMLINK libspdk_accel_error.so
00:02:08.943 LIB libspdk_accel_dsa.a
00:02:08.943 SYMLINK libspdk_scheduler_dynamic.so
00:02:08.943 SYMLINK libspdk_accel_ioat.so
00:02:08.943 LIB libspdk_blob_bdev.a
00:02:08.943 SO libspdk_accel_dsa.so.5.0
00:02:08.943 SYMLINK libspdk_accel_iaa.so
00:02:08.943 SO libspdk_blob_bdev.so.11.0
00:02:08.943 SYMLINK libspdk_blob_bdev.so
00:02:08.943 SYMLINK libspdk_accel_dsa.so
00:02:09.203 LIB libspdk_vfu_device.a
00:02:09.203 SO libspdk_vfu_device.so.3.0
00:02:09.203 CC module/bdev/delay/vbdev_delay.o
00:02:09.203 CC module/bdev/lvol/vbdev_lvol.o
00:02:09.203 CC module/bdev/gpt/gpt.o
00:02:09.203 CC module/blobfs/bdev/blobfs_bdev.o
00:02:09.203 CC module/bdev/malloc/bdev_malloc.o
00:02:09.203 CC module/bdev/null/bdev_null.o
00:02:09.203 CC module/bdev/lvol/vbdev_lvol_rpc.o
00:02:09.203 CC module/bdev/gpt/vbdev_gpt.o
00:02:09.203 CC module/bdev/delay/vbdev_delay_rpc.o
00:02:09.203 CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:02:09.203 CC module/bdev/null/bdev_null_rpc.o
00:02:09.203 CC module/bdev/nvme/bdev_nvme.o
00:02:09.203 CC module/bdev/malloc/bdev_malloc_rpc.o
00:02:09.203 CC module/bdev/raid/bdev_raid.o
00:02:09.203 CC module/bdev/error/vbdev_error.o
00:02:09.203 CC module/bdev/nvme/bdev_nvme_rpc.o
00:02:09.203 CC module/bdev/raid/bdev_raid_rpc.o
00:02:09.203 CC module/bdev/aio/bdev_aio.o
00:02:09.203 CC module/bdev/zone_block/vbdev_zone_block.o
00:02:09.203 CC module/bdev/error/vbdev_error_rpc.o
00:02:09.203 CC module/bdev/raid/bdev_raid_sb.o
00:02:09.203 CC module/bdev/nvme/nvme_rpc.o
00:02:09.203 CC module/bdev/aio/bdev_aio_rpc.o
00:02:09.203 CC module/bdev/split/vbdev_split.o
00:02:09.203 CC module/bdev/iscsi/bdev_iscsi.o
00:02:09.203 CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:02:09.203 CC module/bdev/raid/raid0.o
00:02:09.203 CC module/bdev/nvme/bdev_mdns_client.o
00:02:09.203 CC module/bdev/ftl/bdev_ftl.o
00:02:09.203 CC module/bdev/split/vbdev_split_rpc.o
00:02:09.203 CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:02:09.203 CC module/bdev/raid/raid1.o
00:02:09.203 CC module/bdev/ftl/bdev_ftl_rpc.o
00:02:09.203 CC module/bdev/nvme/vbdev_opal.o
00:02:09.203 CC module/bdev/raid/concat.o
00:02:09.203 CC module/bdev/passthru/vbdev_passthru.o
00:02:09.203 CC module/bdev/nvme/vbdev_opal_rpc.o
00:02:09.203 CC module/bdev/passthru/vbdev_passthru_rpc.o
00:02:09.203 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:02:09.203 CC module/bdev/virtio/bdev_virtio_scsi.o
00:02:09.203 CC module/bdev/virtio/bdev_virtio_blk.o
00:02:09.203 CC module/bdev/virtio/bdev_virtio_rpc.o
00:02:09.203 SYMLINK libspdk_vfu_device.so
00:02:09.461 LIB libspdk_sock_posix.a
00:02:09.461 SO libspdk_sock_posix.so.6.0
00:02:09.461 LIB libspdk_blobfs_bdev.a
00:02:09.719 SYMLINK libspdk_sock_posix.so
00:02:09.719 SO libspdk_blobfs_bdev.so.6.0
00:02:09.719 LIB libspdk_bdev_split.a
00:02:09.719 SO libspdk_bdev_split.so.6.0
00:02:09.719 LIB libspdk_bdev_null.a
00:02:09.719 LIB libspdk_bdev_gpt.a
00:02:09.719 LIB libspdk_bdev_error.a
00:02:09.719 SYMLINK libspdk_blobfs_bdev.so
00:02:09.719 SO libspdk_bdev_gpt.so.6.0
00:02:09.719 SO libspdk_bdev_null.so.6.0
00:02:09.719 LIB libspdk_bdev_ftl.a
00:02:09.719 SO libspdk_bdev_error.so.6.0
00:02:09.719 LIB libspdk_bdev_delay.a
00:02:09.719 SYMLINK libspdk_bdev_split.so
00:02:09.719 LIB libspdk_bdev_aio.a
00:02:09.719 SO libspdk_bdev_ftl.so.6.0
00:02:09.719 SO libspdk_bdev_delay.so.6.0
00:02:09.719 SYMLINK libspdk_bdev_null.so
00:02:09.719 SYMLINK libspdk_bdev_gpt.so
00:02:09.719 LIB libspdk_bdev_zone_block.a
00:02:09.719 LIB libspdk_bdev_malloc.a
00:02:09.719 SO libspdk_bdev_aio.so.6.0
00:02:09.719 SYMLINK libspdk_bdev_error.so
00:02:09.719 LIB libspdk_bdev_passthru.a
00:02:09.719 SO libspdk_bdev_zone_block.so.6.0
00:02:09.719 SO libspdk_bdev_malloc.so.6.0
00:02:09.719 SYMLINK libspdk_bdev_ftl.so
00:02:09.719 SYMLINK libspdk_bdev_delay.so
00:02:09.719 SO libspdk_bdev_passthru.so.6.0
00:02:09.719 LIB libspdk_bdev_iscsi.a
00:02:09.719 SYMLINK libspdk_bdev_aio.so
00:02:09.719 LIB libspdk_bdev_lvol.a
00:02:09.719 SO libspdk_bdev_iscsi.so.6.0
00:02:09.977 SYMLINK libspdk_bdev_zone_block.so
00:02:09.977 SYMLINK libspdk_bdev_malloc.so
00:02:09.977 SO libspdk_bdev_lvol.so.6.0
00:02:09.977 SYMLINK libspdk_bdev_passthru.so
00:02:09.977 SYMLINK libspdk_bdev_iscsi.so
00:02:09.977 SYMLINK libspdk_bdev_lvol.so
00:02:09.977 LIB libspdk_bdev_virtio.a
00:02:09.977 SO libspdk_bdev_virtio.so.6.0
00:02:09.977 SYMLINK libspdk_bdev_virtio.so
00:02:10.544 LIB libspdk_bdev_raid.a
00:02:10.544 SO libspdk_bdev_raid.so.6.0
00:02:10.544 SYMLINK libspdk_bdev_raid.so
00:02:11.477 LIB libspdk_bdev_nvme.a
00:02:11.477 SO libspdk_bdev_nvme.so.7.0
00:02:11.734 SYMLINK libspdk_bdev_nvme.so
00:02:11.992 CC module/event/subsystems/vmd/vmd.o
00:02:11.992 CC module/event/subsystems/iobuf/iobuf.o
00:02:11.992 CC module/event/subsystems/vmd/vmd_rpc.o
00:02:11.992 CC module/event/subsystems/iobuf/iobuf_rpc.o
00:02:11.992 CC module/event/subsystems/scheduler/scheduler.o
00:02:11.992 CC module/event/subsystems/vhost_blk/vhost_blk.o
00:02:11.992 CC module/event/subsystems/keyring/keyring.o
00:02:11.992 CC module/event/subsystems/sock/sock.o
00:02:11.992 CC module/event/subsystems/vfu_tgt/vfu_tgt.o
00:02:12.251 LIB libspdk_event_keyring.a
00:02:12.251 LIB libspdk_event_vhost_blk.a
00:02:12.251 LIB libspdk_event_vfu_tgt.a
00:02:12.251 LIB libspdk_event_vmd.a
00:02:12.251 LIB libspdk_event_scheduler.a
00:02:12.251 LIB libspdk_event_sock.a
00:02:12.251 LIB libspdk_event_iobuf.a
00:02:12.251 SO libspdk_event_keyring.so.1.0
00:02:12.251 SO libspdk_event_vfu_tgt.so.3.0
00:02:12.251 SO libspdk_event_vhost_blk.so.3.0
00:02:12.251 SO libspdk_event_sock.so.5.0
00:02:12.251 SO libspdk_event_scheduler.so.4.0
00:02:12.251 SO libspdk_event_vmd.so.6.0
00:02:12.251 SO libspdk_event_iobuf.so.3.0
00:02:12.251 SYMLINK libspdk_event_keyring.so
00:02:12.251 SYMLINK libspdk_event_vfu_tgt.so
00:02:12.251 SYMLINK libspdk_event_vhost_blk.so
00:02:12.251 SYMLINK libspdk_event_sock.so
00:02:12.251 SYMLINK libspdk_event_scheduler.so
00:02:12.251 SYMLINK libspdk_event_vmd.so
00:02:12.251 SYMLINK libspdk_event_iobuf.so
00:02:12.509 CC module/event/subsystems/accel/accel.o
00:02:12.768 LIB libspdk_event_accel.a
00:02:12.768 SO libspdk_event_accel.so.6.0
00:02:12.768 SYMLINK libspdk_event_accel.so
00:02:13.027 CC module/event/subsystems/bdev/bdev.o
00:02:13.027 LIB libspdk_event_bdev.a
00:02:13.027 SO libspdk_event_bdev.so.6.0
00:02:13.284 SYMLINK libspdk_event_bdev.so
00:02:13.284 CC module/event/subsystems/nvmf/nvmf_rpc.o
00:02:13.284 CC module/event/subsystems/nvmf/nvmf_tgt.o
00:02:13.284 CC module/event/subsystems/nbd/nbd.o
00:02:13.284 CC module/event/subsystems/ublk/ublk.o
00:02:13.284 CC module/event/subsystems/scsi/scsi.o
00:02:13.543 LIB libspdk_event_nbd.a
00:02:13.543 LIB libspdk_event_ublk.a
00:02:13.543 SO libspdk_event_nbd.so.6.0
00:02:13.543 LIB libspdk_event_scsi.a
00:02:13.543 SO libspdk_event_ublk.so.3.0
00:02:13.543 SO libspdk_event_scsi.so.6.0
00:02:13.543 SYMLINK libspdk_event_nbd.so
00:02:13.543 SYMLINK libspdk_event_ublk.so
00:02:13.543 SYMLINK libspdk_event_scsi.so
00:02:13.543 LIB libspdk_event_nvmf.a
00:02:13.543 SO libspdk_event_nvmf.so.6.0
00:02:13.543 SYMLINK libspdk_event_nvmf.so
00:02:13.801 CC module/event/subsystems/vhost_scsi/vhost_scsi.o
00:02:13.801 CC module/event/subsystems/iscsi/iscsi.o
00:02:13.801 LIB libspdk_event_vhost_scsi.a
00:02:13.801 SO libspdk_event_vhost_scsi.so.3.0
00:02:13.801 LIB libspdk_event_iscsi.a
00:02:14.058 SO libspdk_event_iscsi.so.6.0
00:02:14.058 SYMLINK libspdk_event_vhost_scsi.so
00:02:14.058 SYMLINK libspdk_event_iscsi.so
00:02:14.058 SO libspdk.so.6.0
00:02:14.058 SYMLINK libspdk.so
00:02:14.323 CXX app/trace/trace.o
00:02:14.323 CC app/trace_record/trace_record.o
00:02:14.323 CC app/spdk_lspci/spdk_lspci.o
00:02:14.323 CC app/spdk_top/spdk_top.o
00:02:14.323 CC app/spdk_nvme_discover/discovery_aer.o
00:02:14.323 CC app/spdk_nvme_perf/perf.o
00:02:14.323 CC app/spdk_nvme_identify/identify.o
00:02:14.323 CC test/rpc_client/rpc_client_test.o
00:02:14.323 TEST_HEADER include/spdk/accel.h
00:02:14.323 TEST_HEADER include/spdk/accel_module.h
00:02:14.323 TEST_HEADER include/spdk/assert.h
00:02:14.323 TEST_HEADER include/spdk/barrier.h
00:02:14.323 TEST_HEADER include/spdk/base64.h
00:02:14.323 TEST_HEADER include/spdk/bdev.h
00:02:14.323 TEST_HEADER include/spdk/bdev_module.h
00:02:14.323 TEST_HEADER include/spdk/bdev_zone.h
00:02:14.323 TEST_HEADER include/spdk/bit_array.h
00:02:14.323 TEST_HEADER include/spdk/bit_pool.h
00:02:14.323 TEST_HEADER include/spdk/blob_bdev.h
00:02:14.323 TEST_HEADER include/spdk/blobfs_bdev.h
00:02:14.323 TEST_HEADER include/spdk/blobfs.h
00:02:14.323 TEST_HEADER include/spdk/blob.h
00:02:14.323 TEST_HEADER include/spdk/conf.h
00:02:14.323 TEST_HEADER include/spdk/config.h
00:02:14.323 TEST_HEADER include/spdk/cpuset.h
00:02:14.323 TEST_HEADER include/spdk/crc16.h
00:02:14.323 TEST_HEADER include/spdk/crc32.h
00:02:14.323 TEST_HEADER include/spdk/crc64.h
00:02:14.323 TEST_HEADER include/spdk/dif.h
00:02:14.323 TEST_HEADER include/spdk/dma.h
00:02:14.323 TEST_HEADER include/spdk/endian.h
00:02:14.323 TEST_HEADER include/spdk/env_dpdk.h
00:02:14.323 TEST_HEADER include/spdk/env.h
00:02:14.323 TEST_HEADER include/spdk/event.h
00:02:14.323 TEST_HEADER include/spdk/fd_group.h
00:02:14.323 TEST_HEADER include/spdk/file.h
00:02:14.323 TEST_HEADER include/spdk/fd.h
00:02:14.323 TEST_HEADER include/spdk/ftl.h
00:02:14.323 TEST_HEADER include/spdk/gpt_spec.h
00:02:14.323 TEST_HEADER include/spdk/hexlify.h
00:02:14.323 TEST_HEADER include/spdk/histogram_data.h
00:02:14.323 TEST_HEADER include/spdk/idxd.h
00:02:14.323 TEST_HEADER include/spdk/idxd_spec.h
00:02:14.323 TEST_HEADER include/spdk/init.h
00:02:14.323 TEST_HEADER include/spdk/ioat.h
00:02:14.323 TEST_HEADER include/spdk/ioat_spec.h
00:02:14.323 TEST_HEADER include/spdk/iscsi_spec.h
00:02:14.323 TEST_HEADER include/spdk/json.h
00:02:14.323 TEST_HEADER include/spdk/jsonrpc.h
00:02:14.323 TEST_HEADER include/spdk/keyring.h
00:02:14.323 TEST_HEADER include/spdk/keyring_module.h
00:02:14.323 TEST_HEADER include/spdk/likely.h
00:02:14.323 TEST_HEADER include/spdk/log.h
00:02:14.323 TEST_HEADER include/spdk/lvol.h
00:02:14.324 TEST_HEADER include/spdk/memory.h
00:02:14.324 TEST_HEADER include/spdk/mmio.h
00:02:14.324 TEST_HEADER include/spdk/nbd.h
00:02:14.324 TEST_HEADER include/spdk/net.h
00:02:14.324 TEST_HEADER include/spdk/notify.h
00:02:14.324 TEST_HEADER include/spdk/nvme.h
00:02:14.324 TEST_HEADER include/spdk/nvme_intel.h
00:02:14.324 TEST_HEADER include/spdk/nvme_ocssd.h
00:02:14.324 TEST_HEADER include/spdk/nvme_ocssd_spec.h
00:02:14.324 TEST_HEADER include/spdk/nvme_spec.h
00:02:14.324 TEST_HEADER include/spdk/nvme_zns.h
00:02:14.324 TEST_HEADER include/spdk/nvmf_cmd.h
00:02:14.324 TEST_HEADER include/spdk/nvmf_fc_spec.h
00:02:14.324 TEST_HEADER include/spdk/nvmf.h
00:02:14.324 TEST_HEADER include/spdk/nvmf_transport.h
00:02:14.324 TEST_HEADER include/spdk/nvmf_spec.h
00:02:14.324 TEST_HEADER include/spdk/opal.h
00:02:14.324 TEST_HEADER include/spdk/opal_spec.h
00:02:14.324 TEST_HEADER include/spdk/pci_ids.h
00:02:14.324 TEST_HEADER include/spdk/pipe.h
00:02:14.324 TEST_HEADER include/spdk/queue.h
00:02:14.324 TEST_HEADER include/spdk/reduce.h
00:02:14.324 TEST_HEADER include/spdk/rpc.h
00:02:14.324 TEST_HEADER include/spdk/scheduler.h
00:02:14.324 TEST_HEADER include/spdk/scsi.h
00:02:14.324 TEST_HEADER include/spdk/scsi_spec.h
00:02:14.324 TEST_HEADER include/spdk/sock.h
00:02:14.324 TEST_HEADER include/spdk/stdinc.h
00:02:14.324 TEST_HEADER include/spdk/string.h
00:02:14.324 TEST_HEADER include/spdk/thread.h
00:02:14.324 TEST_HEADER include/spdk/trace.h
00:02:14.324 TEST_HEADER include/spdk/trace_parser.h
00:02:14.324 CC examples/interrupt_tgt/interrupt_tgt.o
00:02:14.324 TEST_HEADER include/spdk/ublk.h
00:02:14.324 TEST_HEADER include/spdk/tree.h
00:02:14.324 TEST_HEADER include/spdk/util.h
00:02:14.324 TEST_HEADER include/spdk/uuid.h
00:02:14.324 TEST_HEADER include/spdk/version.h
00:02:14.324 TEST_HEADER include/spdk/vfio_user_pci.h
00:02:14.324 TEST_HEADER include/spdk/vfio_user_spec.h
00:02:14.324 TEST_HEADER include/spdk/vhost.h
00:02:14.324 TEST_HEADER include/spdk/vmd.h
00:02:14.324 TEST_HEADER include/spdk/xor.h
00:02:14.324 TEST_HEADER include/spdk/zipf.h
00:02:14.324 CXX test/cpp_headers/accel.o
00:02:14.324 CXX test/cpp_headers/accel_module.o
00:02:14.324 CXX test/cpp_headers/assert.o
00:02:14.324 CC app/spdk_dd/spdk_dd.o
00:02:14.324 CXX test/cpp_headers/barrier.o
00:02:14.324 CXX test/cpp_headers/base64.o
00:02:14.324 CXX test/cpp_headers/bdev.o
00:02:14.324 CXX test/cpp_headers/bdev_module.o
00:02:14.324 CXX test/cpp_headers/bdev_zone.o
00:02:14.324 CXX test/cpp_headers/bit_array.o
00:02:14.324 CXX test/cpp_headers/bit_pool.o
00:02:14.324 CXX test/cpp_headers/blob_bdev.o
00:02:14.324 CXX test/cpp_headers/blobfs_bdev.o
00:02:14.324 CC app/iscsi_tgt/iscsi_tgt.o
00:02:14.324 CXX test/cpp_headers/blobfs.o
00:02:14.324 CXX test/cpp_headers/blob.o
00:02:14.324 CXX test/cpp_headers/conf.o
00:02:14.324 CC app/nvmf_tgt/nvmf_main.o
00:02:14.324 CXX test/cpp_headers/config.o
00:02:14.324 CXX test/cpp_headers/cpuset.o
00:02:14.324 CXX test/cpp_headers/crc16.o
00:02:14.324 CC app/spdk_tgt/spdk_tgt.o
00:02:14.324 CXX test/cpp_headers/crc32.o
00:02:14.324 CC examples/util/zipf/zipf.o
00:02:14.324 CC examples/ioat/perf/perf.o
00:02:14.324 CC examples/ioat/verify/verify.o
00:02:14.324 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o
00:02:14.324 CC test/env/memory/memory_ut.o
00:02:14.324 CC test/thread/poller_perf/poller_perf.o
00:02:14.324 CC test/env/pci/pci_ut.o
00:02:14.324 CC test/app/histogram_perf/histogram_perf.o
00:02:14.324 CC app/fio/nvme/fio_plugin.o
00:02:14.324 CC test/app/jsoncat/jsoncat.o
00:02:14.583 CC test/env/vtophys/vtophys.o
00:02:14.583 CC test/app/stub/stub.o
00:02:14.583 CC test/dma/test_dma/test_dma.o
00:02:14.583 CC test/app/bdev_svc/bdev_svc.o
00:02:14.583 CC app/fio/bdev/fio_plugin.o
00:02:14.583 LINK spdk_lspci
00:02:14.583 CC test/env/mem_callbacks/mem_callbacks.o
00:02:14.583 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o
00:02:14.583 LINK rpc_client_test
00:02:14.847 LINK spdk_nvme_discover
00:02:14.847 LINK interrupt_tgt
00:02:14.847 LINK histogram_perf
00:02:14.847 LINK spdk_trace_record
00:02:14.847 CXX test/cpp_headers/crc64.o
00:02:14.847 LINK zipf
00:02:14.847 LINK jsoncat
00:02:14.847 LINK nvmf_tgt
00:02:14.847 CXX test/cpp_headers/dif.o
00:02:14.847 LINK poller_perf
00:02:14.847 CXX test/cpp_headers/dma.o
00:02:14.847 LINK vtophys
00:02:14.847 CXX test/cpp_headers/endian.o
00:02:14.847 CXX test/cpp_headers/env_dpdk.o
00:02:14.847 CXX test/cpp_headers/env.o
00:02:14.847 CXX test/cpp_headers/event.o
00:02:14.847 CXX test/cpp_headers/fd_group.o
00:02:14.847 CXX test/cpp_headers/fd.o
00:02:14.847 LINK env_dpdk_post_init
00:02:14.847 CXX test/cpp_headers/file.o
00:02:14.847 CXX test/cpp_headers/ftl.o
00:02:14.847 LINK stub
00:02:14.847 LINK iscsi_tgt
00:02:14.847 CXX test/cpp_headers/gpt_spec.o
00:02:14.847 CXX test/cpp_headers/hexlify.o
00:02:14.847 CXX test/cpp_headers/histogram_data.o
00:02:14.847 LINK ioat_perf
00:02:14.847 CXX test/cpp_headers/idxd.o
00:02:14.847 LINK spdk_tgt
00:02:14.847 CXX test/cpp_headers/idxd_spec.o
00:02:14.847 LINK bdev_svc
00:02:14.847 LINK verify
00:02:14.847 CXX test/cpp_headers/init.o
00:02:15.113 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o
00:02:15.113 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o
00:02:15.113 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o
00:02:15.113 CXX test/cpp_headers/ioat.o
00:02:15.113 CXX test/cpp_headers/ioat_spec.o
00:02:15.113 CXX test/cpp_headers/iscsi_spec.o
00:02:15.113 CXX test/cpp_headers/json.o
00:02:15.113 LINK spdk_trace
00:02:15.113 LINK spdk_dd
00:02:15.113 CXX test/cpp_headers/jsonrpc.o
00:02:15.113 CXX test/cpp_headers/keyring.o
00:02:15.113 CXX test/cpp_headers/keyring_module.o
00:02:15.113 CXX test/cpp_headers/likely.o
00:02:15.113 CXX test/cpp_headers/log.o
00:02:15.113 LINK pci_ut
00:02:15.375 CXX test/cpp_headers/lvol.o
00:02:15.375 CXX test/cpp_headers/memory.o
00:02:15.375 CXX test/cpp_headers/mmio.o
00:02:15.375 CXX test/cpp_headers/nbd.o
00:02:15.375 CXX test/cpp_headers/net.o
00:02:15.375 CXX test/cpp_headers/notify.o
00:02:15.375 CXX test/cpp_headers/nvme.o
00:02:15.375 CXX test/cpp_headers/nvme_intel.o
00:02:15.375 CXX test/cpp_headers/nvme_ocssd.o
00:02:15.375 LINK test_dma
00:02:15.375 CXX test/cpp_headers/nvme_ocssd_spec.o
00:02:15.375 CXX test/cpp_headers/nvme_spec.o
00:02:15.375 CXX test/cpp_headers/nvme_zns.o
00:02:15.375 CXX test/cpp_headers/nvmf_cmd.o
00:02:15.375 CXX test/cpp_headers/nvmf_fc_spec.o
00:02:15.375 CXX test/cpp_headers/nvmf.o
00:02:15.375 CXX test/cpp_headers/nvmf_spec.o
00:02:15.375 CXX test/cpp_headers/nvmf_transport.o
00:02:15.375 CXX test/cpp_headers/opal.o
00:02:15.663 CC test/event/event_perf/event_perf.o
00:02:15.663 CXX test/cpp_headers/opal_spec.o
00:02:15.663 CC test/event/reactor/reactor.o
00:02:15.663 LINK nvme_fuzz
00:02:15.663 CC test/event/reactor_perf/reactor_perf.o
00:02:15.663 LINK spdk_bdev
00:02:15.663 CXX test/cpp_headers/pci_ids.o
00:02:15.663 CXX test/cpp_headers/pipe.o
00:02:15.663 CC examples/sock/hello_world/hello_sock.o
00:02:15.663 CC examples/vmd/lsvmd/lsvmd.o
00:02:15.663 CC examples/thread/thread/thread_ex.o
00:02:15.663 CC examples/idxd/perf/perf.o
00:02:15.663 LINK spdk_nvme
00:02:15.663 CXX test/cpp_headers/queue.o
00:02:15.663 CXX test/cpp_headers/reduce.o
00:02:15.663 CC test/event/app_repeat/app_repeat.o
00:02:15.663 CC examples/vmd/led/led.o
00:02:15.663 CXX test/cpp_headers/rpc.o
00:02:15.663 CXX test/cpp_headers/scheduler.o
00:02:15.663 CXX test/cpp_headers/scsi.o
00:02:15.663 CXX test/cpp_headers/sock.o
00:02:15.663 CXX test/cpp_headers/scsi_spec.o
00:02:15.663 CXX test/cpp_headers/stdinc.o
00:02:15.663 CXX test/cpp_headers/string.o
00:02:15.663 CXX test/cpp_headers/thread.o
00:02:15.663 CXX test/cpp_headers/trace.o
00:02:15.663 CXX test/cpp_headers/trace_parser.o
00:02:15.663 CXX test/cpp_headers/tree.o
00:02:15.663 CXX test/cpp_headers/ublk.o
00:02:15.663 CC test/event/scheduler/scheduler.o
00:02:15.663 CXX test/cpp_headers/util.o
00:02:15.663 CXX test/cpp_headers/uuid.o
00:02:15.663 CXX test/cpp_headers/version.o
00:02:15.941 CXX test/cpp_headers/vfio_user_pci.o
00:02:15.941 CXX test/cpp_headers/vfio_user_spec.o
00:02:15.941 CXX test/cpp_headers/vhost.o
00:02:15.941 LINK event_perf
00:02:15.941 CXX test/cpp_headers/vmd.o
00:02:15.941 CXX test/cpp_headers/xor.o
00:02:15.941 LINK mem_callbacks
00:02:15.941 LINK spdk_nvme_perf
00:02:15.941 LINK reactor
00:02:15.941 CXX test/cpp_headers/zipf.o
00:02:15.941 LINK vhost_fuzz
00:02:15.941 CC app/vhost/vhost.o
00:02:15.941 LINK reactor_perf
00:02:15.941 LINK lsvmd
00:02:15.941 LINK app_repeat
00:02:15.941 LINK led
00:02:15.941 LINK spdk_nvme_identify
00:02:15.941 LINK spdk_top
00:02:15.941 LINK hello_sock
00:02:16.200 LINK thread
00:02:16.200 CC test/nvme/reset/reset.o
00:02:16.200 CC test/nvme/overhead/overhead.o
00:02:16.200 CC test/nvme/aer/aer.o
00:02:16.200 CC test/nvme/e2edp/nvme_dp.o
00:02:16.200 CC test/nvme/sgl/sgl.o
00:02:16.200 CC test/nvme/err_injection/err_injection.o
00:02:16.200 CC test/nvme/startup/startup.o
00:02:16.200 CC test/accel/dif/dif.o
00:02:16.200 CC test/nvme/reserve/reserve.o
00:02:16.200 CC test/blobfs/mkfs/mkfs.o
00:02:16.200 CC test/nvme/simple_copy/simple_copy.o
00:02:16.200 CC test/nvme/connect_stress/connect_stress.o
00:02:16.200 CC test/nvme/boot_partition/boot_partition.o
00:02:16.200 LINK scheduler
00:02:16.200 CC test/nvme/compliance/nvme_compliance.o
00:02:16.200 CC test/nvme/fused_ordering/fused_ordering.o
00:02:16.200 CC test/lvol/esnap/esnap.o
00:02:16.200 LINK idxd_perf
00:02:16.200 CC test/nvme/fdp/fdp.o
00:02:16.200 CC test/nvme/doorbell_aers/doorbell_aers.o
00:02:16.200 CC test/nvme/cuse/cuse.o
00:02:16.200 LINK vhost
00:02:16.459 LINK reserve
00:02:16.459 LINK startup
00:02:16.459 LINK doorbell_aers
00:02:16.459 LINK sgl
00:02:16.459 LINK connect_stress
00:02:16.459 LINK err_injection
00:02:16.459 LINK fused_ordering
00:02:16.459 LINK simple_copy
00:02:16.459 LINK overhead
00:02:16.459 LINK mkfs
00:02:16.459 LINK boot_partition
00:02:16.459 CC examples/nvme/nvme_manage/nvme_manage.o
00:02:16.459 CC examples/nvme/hotplug/hotplug.o
00:02:16.459 CC examples/nvme/arbitration/arbitration.o
00:02:16.459 CC examples/nvme/cmb_copy/cmb_copy.o
00:02:16.459 CC examples/nvme/reconnect/reconnect.o
00:02:16.459 CC examples/nvme/pmr_persistence/pmr_persistence.o
00:02:16.459 CC examples/nvme/hello_world/hello_world.o
00:02:16.459 CC examples/nvme/abort/abort.o
00:02:16.459 LINK reset
00:02:16.719 LINK nvme_compliance
00:02:16.719 CC examples/accel/perf/accel_perf.o
00:02:16.719 LINK aer
00:02:16.719 CC examples/blob/hello_world/hello_blob.o
00:02:16.719 LINK nvme_dp
00:02:16.719 CC examples/blob/cli/blobcli.o
00:02:16.719 LINK fdp
00:02:16.719 LINK memory_ut
00:02:16.719 LINK dif
00:02:16.719 LINK cmb_copy
00:02:16.719 LINK pmr_persistence
00:02:16.719 LINK hotplug
00:02:16.977 LINK hello_world
00:02:16.977 LINK hello_blob
00:02:16.977 LINK reconnect
00:02:16.977 LINK arbitration
00:02:16.977 LINK abort
00:02:17.236 LINK accel_perf
00:02:17.236 LINK nvme_manage
00:02:17.236 CC test/bdev/bdevio/bdevio.o
00:02:17.236 LINK blobcli
00:02:17.494 LINK iscsi_fuzz
00:02:17.494 CC examples/bdev/hello_world/hello_bdev.o
00:02:17.494 CC examples/bdev/bdevperf/bdevperf.o
00:02:17.494 LINK bdevio
00:02:17.750 LINK hello_bdev
00:02:18.007 LINK cuse
00:02:18.265 LINK bdevperf
00:02:18.522 CC examples/nvmf/nvmf/nvmf.o
00:02:18.779 LINK nvmf
00:02:21.302 LINK esnap
00:02:21.560
00:02:21.560 real 0m48.901s
00:02:21.560 user 10m12.554s
00:02:21.560 sys 2m30.628s
00:02:21.560 00:55:37 make -- common/autotest_common.sh@1124 -- $ xtrace_disable
00:02:21.560 00:55:37 make -- common/autotest_common.sh@10 -- $ set +x
00:02:21.560 ************************************
00:02:21.560 END TEST make
00:02:21.560 ************************************
00:02:21.560 00:55:37 -- common/autotest_common.sh@1142 -- $ return 0
00:02:21.560 00:55:37 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources
00:02:21.560 00:55:37 -- pm/common@29 -- $ signal_monitor_resources TERM
00:02:21.560 00:55:37 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:02:21.560 00:55:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:21.560 00:55:37 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:02:21.560 00:55:37 -- pm/common@44 -- $ pid=3940658
00:02:21.560 00:55:37 -- pm/common@50 -- $ kill -TERM 3940658
00:02:21.560 00:55:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:21.560 00:55:37 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:02:21.560 00:55:37 -- pm/common@44 -- $ pid=3940659
00:02:21.560 00:55:37 -- pm/common@50 -- $ kill -TERM 3940659
00:02:21.560 00:55:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:21.560 00:55:37 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:02:21.560 00:55:37 -- pm/common@44 -- $ pid=3940661
00:02:21.560 00:55:37 -- pm/common@50 -- $ kill -TERM 3940661
00:02:21.560 00:55:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:21.560 00:55:37 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:02:21.560 00:55:37 -- pm/common@44 -- $ pid=3940690
00:02:21.560 00:55:37 -- pm/common@50 -- $ sudo -E kill -TERM 3940690
00:02:21.560 00:55:37 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:02:21.560 00:55:37 -- nvmf/common.sh@7 -- # uname -s
00:02:21.560 00:55:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:02:21.560 00:55:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:02:21.560 00:55:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:02:21.560 00:55:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:02:21.560 00:55:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:02:21.560 00:55:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:02:21.560 00:55:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:02:21.560 00:55:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:02:21.560 00:55:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:02:21.560 00:55:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:02:21.560 00:55:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:02:21.560 00:55:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a
00:02:21.560 00:55:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:02:21.560 00:55:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:02:21.560 00:55:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:02:21.560 00:55:37 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:02:21.560 00:55:37 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:02:21.560 00:55:37 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:02:21.560 00:55:37 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:21.560 00:55:37 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:21.560 00:55:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:21.561 00:55:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:21.561 00:55:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:21.561 00:55:37 -- paths/export.sh@5 -- # export PATH
00:02:21.561 00:55:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:21.561 00:55:37 -- nvmf/common.sh@47 -- # : 0
00:02:21.561 00:55:37 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:02:21.561 00:55:37 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:02:21.561 00:55:37 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:02:21.561 00:55:37 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:02:21.561 00:55:37 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:02:21.561 00:55:37 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:02:21.561 00:55:37 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:02:21.561 00:55:37 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:02:21.561 00:55:37 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']'
00:02:21.561 00:55:37 -- spdk/autotest.sh@32 -- # uname -s
00:02:21.561 00:55:37 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']'
00:02:21.561 00:55:37 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h'
00:02:21.561 00:55:37 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps
00:02:21.561 00:55:37 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t'
00:02:21.561 00:55:37 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps
00:02:21.561 00:55:37 -- spdk/autotest.sh@44 -- # modprobe nbd
00:02:21.561 00:55:37 -- spdk/autotest.sh@46 -- # type -P udevadm
00:02:21.561 00:55:37 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm
00:02:21.561 00:55:37 -- spdk/autotest.sh@48 -- # udevadm_pid=3996191
00:02:21.561 00:55:37 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property
00:02:21.561 00:55:37 -- spdk/autotest.sh@53 -- # start_monitor_resources
00:02:21.561 00:55:37 -- pm/common@17 -- # local monitor
00:02:21.561 00:55:37 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:21.561 00:55:37 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:21.561 00:55:37 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:21.561 00:55:37 -- pm/common@21 -- # date +%s
00:02:21.561 00:55:37 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:21.561 00:55:37 -- pm/common@21 -- # date +%s
00:02:21.561 00:55:37 -- pm/common@25 -- # sleep 1
00:02:21.561 00:55:37 -- pm/common@21 -- # date +%s
00:02:21.561 00:55:37 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721084137
00:02:21.561 00:55:37 -- pm/common@21 -- # date +%s
00:02:21.561 00:55:37 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721084137
00:02:21.561 00:55:37 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721084137
00:02:21.561 00:55:37 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721084137
00:02:21.561 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721084137_collect-vmstat.pm.log
00:02:21.561 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721084137_collect-cpu-load.pm.log
00:02:21.561 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721084137_collect-cpu-temp.pm.log
00:02:21.820 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721084137_collect-bmc-pm.bmc.pm.log
00:02:22.758 00:55:38 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
00:02:22.758 00:55:38 -- spdk/autotest.sh@57 -- # timing_enter autotest
00:02:22.758 00:55:38 -- common/autotest_common.sh@722 -- # xtrace_disable
00:02:22.758 00:55:38 -- common/autotest_common.sh@10 -- # set +x
00:02:22.758 00:55:38 -- spdk/autotest.sh@59 -- # create_test_list
00:02:22.758 00:55:38 -- common/autotest_common.sh@746 -- # xtrace_disable
00:02:22.758 00:55:38 -- common/autotest_common.sh@10 -- # set +x
00:02:22.758 00:55:38 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh
00:02:22.758 00:55:38 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:22.758 00:55:38 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:22.758 00:55:38 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:02:22.758 00:55:38 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:22.758 00:55:38 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod
00:02:22.758 00:55:38 -- common/autotest_common.sh@1455 -- # uname
00:02:22.758 00:55:38 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']'
00:02:22.758 00:55:38 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf
00:02:22.758 00:55:38 -- common/autotest_common.sh@1475 -- # uname
00:02:22.758 00:55:38 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]]
00:02:22.758 00:55:38 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk
00:02:22.758 00:55:38 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc
00:02:22.758 00:55:38 -- spdk/autotest.sh@72 -- # hash lcov
00:02:22.758 00:55:38 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:02:22.758 00:55:38 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS=
00:02:22.758 --rc lcov_branch_coverage=1
00:02:22.758 --rc lcov_function_coverage=1
00:02:22.758 --rc genhtml_branch_coverage=1
00:02:22.758 --rc genhtml_function_coverage=1
00:02:22.758 --rc genhtml_legend=1
00:02:22.758 --rc geninfo_all_blocks=1
00:02:22.758 '
00:02:22.758 00:55:38 -- spdk/autotest.sh@80 -- # LCOV_OPTS='
00:02:22.758 --rc lcov_branch_coverage=1
00:02:22.758 --rc lcov_function_coverage=1
00:02:22.758 --rc genhtml_branch_coverage=1
00:02:22.758 --rc genhtml_function_coverage=1
00:02:22.758 --rc genhtml_legend=1
00:02:22.758 --rc geninfo_all_blocks=1
00:02:22.758 '
00:02:22.758 00:55:38 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov
00:02:22.758 --rc lcov_branch_coverage=1
00:02:22.758 --rc lcov_function_coverage=1
00:02:22.758 --rc genhtml_branch_coverage=1
00:02:22.758 --rc genhtml_function_coverage=1
00:02:22.758 --rc genhtml_legend=1
00:02:22.758 --rc geninfo_all_blocks=1
00:02:22.758 --no-external'
00:02:22.758 00:55:38 -- spdk/autotest.sh@81 -- # LCOV='lcov
00:02:22.758 --rc lcov_branch_coverage=1
00:02:22.758 --rc lcov_function_coverage=1
00:02:22.758 --rc genhtml_branch_coverage=1
00:02:22.758 --rc genhtml_function_coverage=1
00:02:22.758 --rc genhtml_legend=1
00:02:22.758 --rc geninfo_all_blocks=1
00:02:22.758 --no-external'
00:02:22.758 00:55:38 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v
00:02:22.758 lcov: LCOV version 1.14
00:02:22.758 00:55:38 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info
00:02:37.641 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found
00:02:37.641 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno
00:02:52.527 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found
00:02:52.527 geninfo: WARNING: GCOV did not produce any data for
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno [ between 00:02:52.527 and 00:02:52.528 geninfo printed the same pair of lines -- '<header>.gcno:no functions found' and 'geninfo: WARNING: GCOV did not produce any data for <header>.gcno' -- for every header under test/cpp_headers/; the repeated warnings are omitted here ] 00:02:52.528 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions
found 00:02:52.528 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:52.528 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:52.528 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:52.528 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:52.528 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:52.528 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:52.528 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:52.528 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:52.528 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:52.528 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:52.528 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:55.850 00:56:11 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:55.850 00:56:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:55.850 00:56:11 -- common/autotest_common.sh@10 -- # set +x 00:02:55.850 00:56:11 -- spdk/autotest.sh@91 -- # rm -f 00:02:55.850 00:56:11 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:56.820 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:02:56.820 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:02:56.820 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:02:56.820 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:02:56.820 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:02:56.820 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:02:56.820 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:02:56.820 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:02:56.820 0000:0b:00.0 (8086 0a54): Already using the nvme driver 00:02:56.820 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:02:56.820 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:02:57.078 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:02:57.078 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:02:57.078 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:02:57.078 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:02:57.078 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:02:57.078 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:02:57.078 00:56:12 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:57.078 00:56:12 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:57.078 00:56:12 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:57.078 00:56:12 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:57.078 00:56:12 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:57.078 00:56:12 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:57.078 00:56:12 -- 
common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:57.078 00:56:12 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:57.078 00:56:12 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:57.078 00:56:12 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:57.078 00:56:12 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:57.078 00:56:12 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:57.078 00:56:12 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:57.078 00:56:12 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:57.078 00:56:12 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:57.078 No valid GPT data, bailing 00:02:57.078 00:56:13 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:57.078 00:56:13 -- scripts/common.sh@391 -- # pt= 00:02:57.078 00:56:13 -- scripts/common.sh@392 -- # return 1 00:02:57.078 00:56:13 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:57.078 1+0 records in 00:02:57.078 1+0 records out 00:02:57.078 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00190876 s, 549 MB/s 00:02:57.078 00:56:13 -- spdk/autotest.sh@118 -- # sync 00:02:57.078 00:56:13 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:57.078 00:56:13 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:57.078 00:56:13 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:58.979 00:56:14 -- spdk/autotest.sh@124 -- # uname -s 00:02:58.979 00:56:14 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:58.979 00:56:14 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:58.979 00:56:14 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:58.979 00:56:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:58.979 00:56:14 -- common/autotest_common.sh@10 -- # set +x 00:02:58.979 ************************************ 00:02:58.979 START TEST setup.sh 00:02:58.979 ************************************ 00:02:58.979 00:56:14 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:59.237 * Looking for test storage... 00:02:59.237 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:59.237 00:56:15 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:59.237 00:56:15 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:59.237 00:56:15 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:59.237 00:56:15 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:59.237 00:56:15 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:59.237 00:56:15 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:59.237 ************************************ 00:02:59.237 START TEST acl 00:02:59.237 ************************************ 00:02:59.237 00:56:15 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:59.237 * Looking for test storage... 
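The get_zoned_devs/is_block_zoned trace above reduces to a single sysfs probe. A minimal standalone sketch of that logic, assuming the usual /sys/block layout (the function name follows the trace; the loop framing and echo are illustrative, not SPDK's code):

    #!/usr/bin/env bash
    # A device counts as zoned when the kernel's zoned model is anything but "none".
    is_block_zoned() {
        local device=$1
        # Devices that never expose the attribute are treated as conventional.
        [[ -e /sys/block/$device/queue/zoned ]] || return 1
        [[ $(< "/sys/block/$device/queue/zoned") != none ]]
    }

    # Mirror the trace's loop: probe each NVMe namespace under /sys/block.
    for nvme in /sys/block/nvme*; do
        dev=${nvme##*/}
        is_block_zoned "$dev" && echo "$dev is zoned"
    done

In the run above the comparison evaluates to [[ none != none ]] for nvme0n1, so nothing lands in zoned_devs and the GPT probe and dd wipe proceed on the bare device.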
00:02:59.237 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:59.237 00:56:15 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:59.237 00:56:15 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:59.237 00:56:15 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:59.237 00:56:15 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:59.237 00:56:15 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:59.237 00:56:15 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:59.237 00:56:15 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:59.237 00:56:15 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:59.237 00:56:15 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:59.237 00:56:15 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:59.237 00:56:15 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:59.237 00:56:15 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:59.237 00:56:15 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:59.237 00:56:15 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:59.237 00:56:15 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:59.237 00:56:15 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:01.136 00:56:16 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:01.136 00:56:16 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:01.136 00:56:16 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:01.136 00:56:16 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:01.136 00:56:16 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:01.136 00:56:16 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:02.071 Hugepages 00:03:02.071 node hugesize free / total 00:03:02.071 00:56:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:02.071 00:56:17 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:02.071 00:56:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:02.071 00:56:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:02.071 00:56:17 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:02.071 00:56:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:02.071 00:56:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:02.071 00:56:17 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:02.071 00:56:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:02.071 00:03:02.071 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:02.071 00:56:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:02.071 00:56:17 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:02.071 00:56:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:02.071 00:56:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:02.071 00:56:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:02.071 00:56:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:02.071 00:56:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:02.071 00:56:17 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:56:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:56:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:56:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:02.071 [ 0000:00:04.2-0000:00:04.7 repeat the identical '[[ ioatdma == nvme ]]' / 'continue' iteration; the repeated trace is omitted here ] 00:56:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:0b:00.0 == *:*:*.* ]] 00:56:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:56:17 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\b\:\0\0\.\0* ]] 00:56:17 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:56:17 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:56:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ [ 0000:80:04.0-0000:80:04.7 likewise repeat the ioatdma 'continue' iteration; the repeated trace is omitted here ] 00:03:02.072 00:56:17 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:56:17 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:56:17 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:56:17 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:56:17 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:02.072 ************************************ 00:03:02.072 START TEST denied 00:03:02.072 ************************************ 00:03:02.072 00:56:17 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:56:17 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:0b:00.0' 00:56:17 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:0b:00.0' 00:56:17 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:56:17 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:56:17 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:03.971 0000:0b:00.0 (8086 0a54): Skipping denied controller at 0000:0b:00.0 00:56:19 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:0b:00.0 00:56:19 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:56:19 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:56:19 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:0b:00.0 ]] 00:56:19 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:0b:00.0/driver 00:56:19 setup.sh.acl.denied --
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:03.971 00:56:19 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:03.971 00:56:19 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:03.971 00:56:19 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:03.971 00:56:19 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:06.498 00:03:06.498 real 0m4.048s 00:03:06.498 user 0m1.183s 00:03:06.498 sys 0m1.912s 00:03:06.498 00:56:21 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:06.498 00:56:21 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:06.498 ************************************ 00:03:06.498 END TEST denied 00:03:06.498 ************************************ 00:03:06.498 00:56:21 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:06.498 00:56:21 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:06.498 00:56:21 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:06.498 00:56:21 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:06.498 00:56:21 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:06.498 ************************************ 00:03:06.498 START TEST allowed 00:03:06.498 ************************************ 00:03:06.498 00:56:22 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:06.498 00:56:22 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:0b:00.0 00:03:06.498 00:56:22 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:06.498 00:56:22 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:0b:00.0 .*: nvme -> .*' 00:03:06.498 00:56:22 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:06.498 00:56:22 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:08.400 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:03:08.400 00:56:24 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:08.400 00:56:24 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:08.400 00:56:24 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:08.400 00:56:24 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:08.400 00:56:24 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:10.304 00:03:10.304 real 0m3.861s 00:03:10.304 user 0m1.077s 00:03:10.304 sys 0m1.690s 00:03:10.304 00:56:25 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:10.304 00:56:25 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:10.304 ************************************ 00:03:10.304 END TEST allowed 00:03:10.304 ************************************ 00:03:10.304 00:56:25 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:10.304 00:03:10.304 real 0m10.847s 00:03:10.304 user 0m3.381s 00:03:10.304 sys 0m5.484s 00:03:10.304 00:56:25 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:10.304 00:56:25 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:10.304 ************************************ 00:03:10.304 END TEST acl 00:03:10.304 ************************************ 00:03:10.304 00:56:25 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:10.304 00:56:25 setup.sh -- 
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:10.304 00:56:25 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:10.304 00:56:25 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:10.304 00:56:25 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:10.304 ************************************ 00:03:10.304 START TEST hugepages 00:03:10.304 ************************************ 00:03:10.304 00:56:25 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:10.304 * Looking for test storage... 00:03:10.304 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:10.304 00:56:25 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:10.304 00:56:25 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:10.304 00:56:25 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:10.304 00:56:25 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:10.304 00:56:25 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:10.305 00:56:25 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:10.305 00:56:25 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:10.305 00:56:25 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:10.305 00:56:25 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:10.305 00:56:25 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:10.305 00:56:25 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.305 00:56:25 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:10.305 00:56:25 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:10.305 00:56:25 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.305 00:56:25 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.305 00:56:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:10.305 00:56:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:10.305 00:56:25 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 39693980 kB' 'MemAvailable: 43240424 kB' 'Buffers: 2704 kB' 'Cached: 14372872 kB' 'SwapCached: 0 kB' 'Active: 11323788 kB' 'Inactive: 3510472 kB' 'Active(anon): 10887372 kB' 'Inactive(anon): 0 kB' 'Active(file): 436416 kB' 'Inactive(file): 3510472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 461948 kB' 'Mapped: 216088 kB' 'Shmem: 10428688 kB' 'KReclaimable: 186792 kB' 'Slab: 541164 kB' 'SReclaimable: 186792 kB' 'SUnreclaim: 354372 kB' 'KernelStack: 12768 kB' 'PageTables: 8456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562312 kB' 'Committed_AS: 12003448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195936 kB' 'VmallocChunk: 0 kB' 'Percpu: 31872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1646172 kB' 'DirectMap2M: 20293632 kB' 'DirectMap1G: 47185920 kB' 00:03:10.305 [ xtrace then walks every field of the meminfo dump in order, emitting the same four lines per field -- '[[ <field> == \H\u\g\e\p\a\g\e\s\i\z\e ]]', 'continue', "IFS=': '", 'read -r var val _' -- until Hugepagesize matches; the repeated per-field trace is omitted here ]
setup/common.sh@31 -- # read -r var val _ 00:03:10.306 00:56:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:10.306 00:56:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:10.306 00:56:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:10.306 00:56:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:10.306 00:56:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:10.306 00:56:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:10.306 00:56:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:10.306 00:56:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:10.306 00:56:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:10.306 00:56:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:10.306 00:56:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:10.306 00:56:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:10.306 00:56:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:10.306 00:56:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:10.306 00:56:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:10.306 00:56:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:10.306 00:56:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:10.306 00:56:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:10.306 00:56:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:10.306 00:56:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:10.306 00:56:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:10.306 00:56:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:10.306 00:56:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:10.306 00:56:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:10.306 00:56:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:10.306 00:56:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:10.306 00:56:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:10.306 00:56:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:10.306 00:56:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:10.306 00:56:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:10.306 00:56:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:10.306 00:56:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:10.306 00:56:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:10.306 00:56:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:10.306 00:56:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:10.306 00:56:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:10.306 00:56:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:10.306 00:56:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:10.306 00:56:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:10.306 00:56:26 setup.sh.hugepages -- 
00:03:10.306 00:56:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:10.306 00:56:26 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:03:10.306 00:56:26 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:03:10.306 00:56:26 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:03:10.306 00:56:26 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:03:10.306 00:56:26 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:03:10.306 00:56:26 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:03:10.306 00:56:26 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:03:10.306 00:56:26 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:03:10.306 00:56:26 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:03:10.306 00:56:26 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:03:10.306 00:56:26 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:03:10.306 00:56:26 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:10.306 00:56:26 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:03:10.306 00:56:26 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:10.306 00:56:26 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:10.306 00:56:26 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:10.306 00:56:26 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:10.306 00:56:26 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:03:10.306 00:56:26 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
[... clear_hp then loops over both NUMA nodes (@39) and, for every "/sys/devices/system/node/node$node/hugepages/hugepages-"* entry (@40), echoes 0 into its nr_hugepages file (@41) ...]
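The get_meminfo helper traced above scans /proc/meminfo one field at a time: IFS=': ' splits each "Key: value kB" line, and the backslash-escaped [[ comparison looks for the requested key. A minimal, self-contained sketch of the same technique -- not the actual setup/common.sh, whose full source is not part of this log:

    #!/usr/bin/env bash
    # Print the value of one /proc/meminfo field, e.g. "Hugepagesize" -> 2048.
    get_meminfo_field() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # "Hugepagesize:    2048 kB" splits into var=Hugepagesize, val=2048
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done </proc/meminfo
        return 1
    }

    get_meminfo_field Hugepagesize   # prints 2048 on this test node

The traced helper additionally supports per-node meminfo files under /sys/devices/system/node; this sketch covers only the /proc/meminfo path.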
00:03:10.306 00:56:26 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:03:10.306 00:56:26 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:03:10.306 00:56:26 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:03:10.306 00:56:26 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:10.306 00:56:26 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:10.306 00:56:26 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:10.306 ************************************
00:03:10.306 START TEST default_setup
00:03:10.306 ************************************
00:03:10.306 00:56:26 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup
00:03:10.306 00:56:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:03:10.306 00:56:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:03:10.306 00:56:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:10.306 00:56:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:03:10.306 00:56:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:10.306 00:56:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:03:10.306 00:56:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:10.306 00:56:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:10.306 00:56:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:10.306 00:56:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:10.306 00:56:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:03:10.307 00:56:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:10.307 00:56:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:10.307 00:56:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:10.307 00:56:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:10.307 00:56:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:10.307 00:56:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:10.307 00:56:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:10.307 00:56:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:03:10.307 00:56:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:03:10.307 00:56:26 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:03:10.307 00:56:26 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
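The get_test_nr_hugepages 2097152 0 trace above is plain arithmetic: the requested pool size in kB divided by the Hugepagesize found earlier gives the page count, which is then assigned to each user-listed node. A sketch of that computation, modeled on the trace (variable names are illustrative, not the exact hugepages.sh code):

    # 2 GiB pool of 2 MB pages on node 0, as requested by default_setup
    default_hugepages=2048              # kB per hugepage (Hugepagesize)
    size=2097152                        # requested pool size in kB
    node_ids=(0)                        # nodes the caller asked for

    nr_hugepages=$((size / default_hugepages))   # 2097152 / 2048 = 1024
    declare -a nodes_test
    for node in "${node_ids[@]}"; do
        nodes_test[node]=$nr_hugepages
    done
    echo "node0: ${nodes_test[0]} hugepages"     # -> node0: 1024 hugepages

The rebind messages that follow are output from scripts/setup.sh preparing the test devices for userspace drivers.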
00:03:11.683 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:03:11.683 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:03:11.683 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:03:11.683 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:03:11.683 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:03:11.683 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:03:11.683 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:03:11.683 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:03:11.683 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:03:11.683 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:03:11.683 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:03:11.683 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:03:11.683 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:03:11.683 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:03:11.683 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:03:11.683 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:03:12.627 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci
00:03:12.627 00:56:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:03:12.627 00:56:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:03:12.627 00:56:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:03:12.627 00:56:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:03:12.627 00:56:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:03:12.627 00:56:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:03:12.627 00:56:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:03:12.627 00:56:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:12.627 00:56:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:12.627 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:12.627 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:12.627 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:12.627 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:12.627 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:12.627 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:12.627 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:12.627 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:12.627 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:12.627 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:12.627 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:12.627 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 41805608 kB' 'MemAvailable: 45352052 kB' 'Buffers: 2704 kB' 'Cached: 14372960 kB' 'SwapCached: 0 kB' 'Active: 11341096 kB' 'Inactive: 3510472 kB' 'Active(anon): 10904680 kB' 'Inactive(anon): 0 kB' 'Active(file): 436416 kB' 'Inactive(file): 3510472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 479528 kB' 'Mapped: 215972 kB' 'Shmem: 10428776 kB' 'KReclaimable: 186792 kB' 'Slab: 540976 kB' 'SReclaimable: 186792 kB' 'SUnreclaim: 354184 kB' 'KernelStack: 12688 kB' 'PageTables: 8084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 12020668 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196000 kB' 'VmallocChunk: 0 kB' 'Percpu: 31872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1646172 kB' 'DirectMap2M: 20293632 kB' 'DirectMap1G: 47185920 kB'
00:03:12.627 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:12.627 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[... the same compare / continue / IFS=': ' / read cycle repeats for each following /proc/meminfo key until AnonHugePages is reached ...]
00:03:12.628 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:12.628 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:12.628 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:12.628 00:56:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
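verify_nr_hugepages reads AnonHugePages first (0 kB here, i.e. no transparent hugepages are inflating the counts) and then, in the entries that follow, HugePages_Surp and HugePages_Rsvd, so that surplus and reserved pages can be discounted before the configured pool is compared against the kernel's counters. A sketch of that bookkeeping, reusing the get_meminfo_field helper sketched earlier; the exact pass/fail expression lives in setup/hugepages.sh, which this log excerpt does not show:

    anon=$(get_meminfo_field AnonHugePages)     # 0 on this run
    surp=$(get_meminfo_field HugePages_Surp)    # 0
    resv=$(get_meminfo_field HugePages_Rsvd)    # 0
    total=$(get_meminfo_field HugePages_Total)  # 1024
    free=$(get_meminfo_field HugePages_Free)    # 1024

    # Discount surplus pages before comparing against the requested pool.
    if (( total - surp == 1024 )); then
        echo "hugepage pool matches the requested 1024 pages"
    fi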
00:03:12.628 00:56:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:12.628 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:12.628 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:12.628 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:12.628 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:12.628 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:12.628 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:12.628 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:12.628 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:12.628 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:12.628 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:12.628 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:12.629 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 41806252 kB' 'MemAvailable: 45352696 kB' 'Buffers: 2704 kB' 'Cached: 14372964 kB' 'SwapCached: 0 kB' 'Active: 11341520 kB' 'Inactive: 3510472 kB' 'Active(anon): 10905104 kB' 'Inactive(anon): 0 kB' 'Active(file): 436416 kB' 'Inactive(file): 3510472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 479544 kB' 'Mapped: 216104 kB' 'Shmem: 10428780 kB' 'KReclaimable: 186792 kB' 'Slab: 540952 kB' 'SReclaimable: 186792 kB' 'SUnreclaim: 354160 kB' 'KernelStack: 12720 kB' 'PageTables: 8164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 12020688 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195984 kB' 'VmallocChunk: 0 kB' 'Percpu: 31872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1646172 kB' 'DirectMap2M: 20293632 kB' 'DirectMap1G: 47185920 kB'
00:03:12.629 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:12.629 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[... the same compare / continue / IFS=': ' / read cycle repeats for each following /proc/meminfo key until HugePages_Surp, the second-to-last hugepage counter, is reached ...]
00:03:12.630 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:12.630 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:12.630 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:12.630 00:56:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
00:03:12.630 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:12.630 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:12.630 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:12.630 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 41806508 kB' 'MemAvailable: 45352952 kB' 'Buffers: 2704 kB' 'Cached: 14372980 kB' 'SwapCached: 0 kB' 'Active: 11341044 kB' 'Inactive: 3510472 kB' 'Active(anon): 10904628 kB' 'Inactive(anon): 0 kB' 'Active(file): 436416 kB' 'Inactive(file): 3510472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 479036 kB' 'Mapped: 216104 kB' 'Shmem: 10428796 kB' 'KReclaimable: 186792 kB' 'Slab: 541032 kB' 'SReclaimable: 186792 kB' 'SUnreclaim: 354240 kB' 'KernelStack: 12704 kB' 'PageTables: 8076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 12020708 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196000 kB' 'VmallocChunk: 0 kB' 'Percpu: 31872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1646172 kB' 'DirectMap2M: 20293632 kB' 'DirectMap1G: 47185920 kB'
00:03:12.630 00:56:28 setup.sh.hugepages.default_setup -- [xtrace condensed: setup/common.sh@31-32 scans the keys of the dump above in order and issues continue for every key that is not HugePages_Rsvd]
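To make the wall of xtrace above legible: setup/common.sh@17-33 is a small meminfo lookup helper, and nearly every line of it is visible in the trace. The following is a sketch reconstructed from the traced commands, not a verbatim copy of setup/common.sh; in particular, what @25 does when a requested node has no meminfo file is never exercised in this log, so that branch is a guess.

  # get_meminfo KEY [NODE] - print KEY's value from /proc/meminfo, or from
  # the node-local meminfo file when NODE is given and present.
  get_meminfo() {
    local get=$1                                                   # @17
    local node=$2                                                  # @18
    local var val                                                  # @19
    local mem_f mem                                                # @20
    mem_f=/proc/meminfo                                            # @22
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then   # @23
      mem_f=/sys/devices/system/node/node$node/meminfo             # @24
    elif [[ -n $node ]]; then                                      # @25
      return 1  # guess: bail out if a node was requested but is absent
    fi
    mapfile -t mem < "$mem_f"                                      # @28
    # Node-local files prefix every line with "Node N "; strip that.
    # +([0-9]) is an extglob pattern, so shopt -s extglob is assumed.
    mem=("${mem[@]#Node +([0-9]) }")                               # @29
    while IFS=': ' read -r var val _; do                           # @31
      # The quoted right-hand side makes this a literal comparison,
      # which xtrace renders as [[ $var == \H\u\g\e... ]].
      [[ $var == "$get" ]] || continue                             # @32
      echo "$val"                                                  # @33
      return 0
    done < <(printf '%s\n' "${mem[@]}")                            # @16
  }

Each call therefore prints exactly one number on stdout (0 above, for HugePages_Surp), which the caller captures by command substitution, as in surp=$(get_meminfo HugePages_Surp) at setup/hugepages.sh@99.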
00:03:12.632 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:12.632 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:12.632 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:12.632 00:56:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:03:12.632 00:56:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:12.632 nr_hugepages=1024
00:03:12.632 00:56:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:12.632 resv_hugepages=0
00:03:12.632 00:56:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:12.632 surplus_hugepages=0
00:03:12.632 00:56:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:12.632 anon_hugepages=0
00:03:12.632 00:56:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:12.632 00:56:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
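The values just echoed feed the HugeTLB accounting checks at @107-@110. The literal 1024 on the left of those arithmetic tests is already expanded when xtrace prints them; at @110 below it is the stdout of the traced get_meminfo HugePages_Total call, substituted into the expression before bash echoes it. Restated as standalone shell, reusing the get_meminfo sketch above (illustrative, not the script's literal lines):

  nr_hugepages=1024                    # pool size the test configured
  surp=$(get_meminfo HugePages_Surp)   # 0 in this run
  resv=$(get_meminfo HugePages_Rsvd)   # 0 in this run
  # The kernel's reported total must equal the requested pages plus any
  # surplus and reserved pages. (( )) exits non-zero on a mismatch, which
  # aborts the test if the script runs under set -e.
  (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))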
00:03:12.632 00:56:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:12.632 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:12.632 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:12.632 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:12.632 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:12.632 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:12.632 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:12.632 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:12.632 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:12.632 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:12.633 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:12.633 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:12.633 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 41806256 kB' 'MemAvailable: 45352700 kB' 'Buffers: 2704 kB' 'Cached: 14373004 kB' 'SwapCached: 0 kB' 'Active: 11341124 kB' 'Inactive: 3510472 kB' 'Active(anon): 10904708 kB' 'Inactive(anon): 0 kB' 'Active(file): 436416 kB' 'Inactive(file): 3510472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 479120 kB' 'Mapped: 216104 kB' 'Shmem: 10428820 kB' 'KReclaimable: 186792 kB' 'Slab: 541000 kB' 'SReclaimable: 186792 kB' 'SUnreclaim: 354208 kB' 'KernelStack: 12736 kB' 'PageTables: 8164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 12020732 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196000 kB' 'VmallocChunk: 0 kB' 'Percpu: 31872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1646172 kB' 'DirectMap2M: 20293632 kB' 'DirectMap1G: 47185920 kB'
00:03:12.633 00:56:28 setup.sh.hugepages.default_setup -- [xtrace condensed: setup/common.sh@31-32 scans the keys of the dump above in order and issues continue for every key that is not HugePages_Total]
00:03:12.634 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:12.634 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:03:12.634 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:12.634 00:56:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:12.634 00:56:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:03:12.634 00:56:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:03:12.634 00:56:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:12.634 00:56:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:12.634 00:56:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:12.634 00:56:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:12.634 00:56:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:12.634 00:56:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:12.634 00:56:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:12.634 00:56:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:12.634 00:56:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:12.634 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:12.634 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:03:12.634 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:12.634 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:12.634 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:12.634 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:12.634 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:12.634 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:12.634 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:12.634 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:12.634 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:12.634 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 25555544 kB' 'MemUsed: 7321396 kB' 'SwapCached: 0 kB' 'Active: 3971092 kB' 'Inactive: 200196 kB' 'Active(anon): 3797960 kB' 'Inactive(anon): 0 kB' 'Active(file): 173132 kB' 'Inactive(file): 200196 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4082884 kB' 'Mapped: 100840 kB' 'AnonPages: 91600 kB' 'Shmem: 3709556 kB' 'KernelStack: 6920 kB' 'PageTables: 3784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61556 kB' 'Slab: 259920 kB' 'SReclaimable: 61556 kB' 'SUnreclaim: 198364 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:12.634 00:56:28 setup.sh.hugepages.default_setup -- [xtrace condensed: setup/common.sh@31-32 scans the node0 keys above looking for HugePages_Surp; the trace continues]
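get_nodes (setup/hugepages.sh@27-33) enumerates the NUMA nodes with an extglob pattern and records a per-node hugepage count: 1024 pages on node0, 0 on node1, so no_nodes=2. The assigned values are already expanded when xtrace prints @30, so the file they are read from is not visible in this log; the sysfs path below is an assumption, as is deriving no_nodes from the array size.

  shopt -s extglob   # required for the +([0-9]) pattern
  get_nodes() {
    local node
    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do   # @29
      # ${node##*node} keeps only the trailing id: .../node0 -> 0
      nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")   # @30, source file assumed
    done
    no_nodes=${#nodes_sys[@]}   # 2 on this machine (node0, node1)
    (( no_nodes > 0 ))          # @33: at least one node must exist
  }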
00:03:12.635 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:12.636 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:12.636 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:12.636 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:12.636 00:56:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:12.636 00:56:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:12.636 00:56:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:12.636 00:56:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:12.636 00:56:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:12.636 00:56:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:12.636 node0=1024 expecting 1024
00:03:12.636 00:56:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:12.636
00:03:12.636 real	0m2.546s
00:03:12.636 user	0m0.687s
00:03:12.636 sys	0m0.958s
00:03:12.636 00:56:28 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:12.636 00:56:28 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:03:12.636 ************************************
00:03:12.636 END TEST default_setup
00:03:12.636 ************************************
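[Editor's note] The field-by-field scan trimmed above is the generic get_meminfo helper from setup/common.sh doing a linear search over node0's meminfo. A minimal sketch of that helper, reconstructed from the xtrace (an assumption, not a verbatim copy of the SPDK source):

    shopt -s extglob   # needed for the +([0-9]) pattern below
    get_meminfo() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo mem
        # Per-node counters live in sysfs; use them when a node id was given.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")      # drop the "Node N " prefix of sysfs lines
        while IFS=': ' read -r var val _; do  # the field-by-field scan seen in the trace
            [[ $var == "$get" ]] || continue
            echo "$val" && return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Called as get_meminfo HugePages_Surp 0, it walks node0's meminfo until the HugePages_Surp line matches and prints its value (0 here), which is the echo 0 / return 0 pair in the trace above.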
00:03:12.895 00:56:28 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:12.895 00:56:28 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:03:12.895 00:56:28 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:12.895 00:56:28 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:12.895 00:56:28 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:12.895 ************************************
00:03:12.895 START TEST per_node_1G_alloc
00:03:12.895 ************************************
00:03:12.895 00:56:28 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc
00:03:12.895 00:56:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:03:12.895 00:56:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:03:12.895 00:56:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:12.895 00:56:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:03:12.895 00:56:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:03:12.895 00:56:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:03:12.895 00:56:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:12.895 00:56:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:12.895 00:56:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:12.895 00:56:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:03:12.895 00:56:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:03:12.895 00:56:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:12.895 00:56:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:12.895 00:56:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:12.895 00:56:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:12.895 00:56:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:12.895 00:56:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:03:12.895 00:56:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:12.895 00:56:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:12.895 00:56:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:12.895 00:56:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:12.895 00:56:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:12.895 00:56:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:03:12.895 00:56:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:03:12.895 00:56:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:03:12.895 00:56:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:12.895 00:56:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
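[Editor's note] In the trace above, get_test_nr_hugepages turned the 1 GiB request into 512 default-sized pages and get_test_nr_hugepages_per_node pinned that count to each listed node. The arithmetic, as a sketch (assumption: simplified from the xtrace, not lifted from setup/hugepages.sh; 2048 kB is the Hugepagesize this host reports):

    size_kb=1048576                                      # 1 GiB, the per_node_1G_alloc request
    default_hugepages_kb=2048                            # kB per huge page on this host
    nr_hugepages=$(( size_kb / default_hugepages_kb ))   # -> 512
    user_nodes=(0 1)
    nodes_test=()
    for node in "${user_nodes[@]}"; do
        nodes_test[$node]=$nr_hugepages                  # 512 pages requested on each node
    done
    NRHUGE=512 HUGENODE=0,1 scripts/setup.sh             # hand off to the allocator, as traced
                                                         # (the real run uses the absolute path above)

setup.sh then reserves 512 pages on node0 and node1 (1024 total) and checks the devices it manages -- here every NIC and SSD is already bound to vfio-pci, as the listing below shows.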
00:03:13.828 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:13.828 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:13.828 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:13.828 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:13.828 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:13.828 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:13.828 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:13.828 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:13.828 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:13.828 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:13.828 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:13.828 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:13.828 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:13.828 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:13.828 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:13.828 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:13.828 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:14.091 00:56:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:03:14.091 00:56:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:03:14.091 00:56:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:14.091 00:56:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:14.091 00:56:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:14.091 00:56:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:14.091 00:56:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:14.091 00:56:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:14.091 00:56:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
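[Editor's note] The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test above is verify_nr_hugepages reading /sys/kernel/mm/transparent_hugepage/enabled ("madvise" is the selected mode on this host) and only sampling AnonHugePages when THP is not pinned to "never". A sketch of that gate (assumption: inferred from the trace, variable names mine):

    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)                 # 0 kB in the snapshot below
    fi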
00:03:14.091 00:56:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:14.091 00:56:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:14.091 00:56:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:14.091 00:56:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:14.091 00:56:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:14.091 00:56:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:14.091 00:56:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:14.091 00:56:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:14.091 00:56:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:14.091 00:56:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:14.091 00:56:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:14.091 00:56:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:14.091 00:56:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 41796480 kB' 'MemAvailable: 45342924 kB' 'Buffers: 2704 kB' 'Cached: 14373068 kB' 'SwapCached: 0 kB' 'Active: 11341092 kB' 'Inactive: 3510472 kB' 'Active(anon): 10904676 kB' 'Inactive(anon): 0 kB' 'Active(file): 436416 kB' 'Inactive(file): 3510472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 478968 kB' 'Mapped: 216140 kB' 'Shmem: 10428884 kB' 'KReclaimable: 186792 kB' 'Slab: 541000 kB' 'SReclaimable: 186792 kB' 'SUnreclaim: 354208 kB' 'KernelStack: 12736 kB' 'PageTables: 8120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 12020776 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196064 kB' 'VmallocChunk: 0 kB' 'Percpu: 31872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1646172 kB' 'DirectMap2M: 20293632 kB' 'DirectMap1G: 47185920 kB'
00:03:14.091 00:56:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:14.091 00:56:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[xtrace trimmed: the [[ <field> == AnonHugePages ]] / continue / IFS / read cycle repeats for every field from MemFree through HardwareCorrupted]
00:03:14.093 00:56:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:14.093 00:56:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:14.093 00:56:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
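[Editor's note] A quick consistency check on the snapshot above (a worked example, not part of the test): HugePages_Total x Hugepagesize = 1024 x 2048 kB = 2097152 kB, which matches the Hugetlb line, and HugePages_Free is still 1024, so the full pool -- 512 pages on each of the two nodes -- is allocated and untouched.

    echo $(( 1024 * 2048 ))   # -> 2097152 (kB), the Hugetlb value reported above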
00:03:14.093 00:56:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:14.093 00:56:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:14.093 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:14.093 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:14.093 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:14.093 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:14.093 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:14.093 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:14.093 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:14.093 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:14.093 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:14.093 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:14.093 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:14.094 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 41796184 kB' 'MemAvailable: 45342628 kB' 'Buffers: 2704 kB' 'Cached: 14373068 kB' 'SwapCached: 0 kB' 'Active: 11341828 kB' 'Inactive: 3510472 kB' 'Active(anon): 10905412 kB' 'Inactive(anon): 0 kB' 'Active(file): 436416 kB' 'Inactive(file): 3510472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 479732 kB' 'Mapped: 216116 kB' 'Shmem: 10428884 kB' 'KReclaimable: 186792 kB' 'Slab: 540956 kB' 'SReclaimable: 186792 kB' 'SUnreclaim: 354164 kB' 'KernelStack: 12816 kB' 'PageTables: 8256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 12020428 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196064 kB' 'VmallocChunk: 0 kB' 'Percpu: 31872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1646172 kB' 'DirectMap2M: 20293632 kB' 'DirectMap1G: 47185920 kB'
00:03:14.094 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:14.094 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[xtrace trimmed: the [[ <field> == HugePages_Surp ]] / continue / IFS / read cycle repeats for every field from MemFree through HugePages_Rsvd]
00:03:14.096 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:14.096 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:14.096 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:14.096 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:14.096 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:14.096 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:14.096 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:14.096 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:14.096 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:14.096 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:14.096 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:14.096 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:14.096 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:14.096 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:14.096 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:14.096 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:14.096 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 41799024 kB' 'MemAvailable: 45345468 kB' 'Buffers: 2704 kB' 'Cached: 14373088 kB' 'SwapCached: 0 kB' 'Active: 11341220 kB' 'Inactive: 3510472 kB' 'Active(anon): 10904804 kB' 'Inactive(anon): 0 kB' 'Active(file): 436416 kB' 'Inactive(file): 3510472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 479192 kB' 'Mapped: 216116 kB' 'Shmem: 10428904 kB' 'KReclaimable: 186792 kB' 'Slab: 541032 kB' 'SReclaimable: 186792 kB' 'SUnreclaim: 354240 kB' 'KernelStack: 12720 kB' 'PageTables: 7972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 12020588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 31872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1646172 kB' 'DirectMap2M: 20293632 kB' 'DirectMap1G: 47185920 kB'
[00:03:14.096-14.099: per-key scan elided: setup/common.sh@31-32 "read -r var val _" / "continue" over every snapshot key until HugePages_Rsvd matches]
00:03:14.099 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:14.099 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:14.099 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
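The trace above is setup/common.sh's get_meminfo helper walking every key of a /proc/meminfo snapshot until it reaches the requested one (here HugePages_Surp, then HugePages_Rsvd, both 0). A minimal sketch of that helper, reconstructed from the trace with the same function and variable names; the extglob prefix pattern "+([0-9])" is swapped for a plain glob so the sketch runs without shopt -s extglob, and the trailing return 1 is an assumed fallback not visible in the trace:

    get_meminfo() {
        local get=$1 node=$2
        local var val _
        local mem_f=/proc/meminfo mem
        # a node argument switches the source to that node's own meminfo
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # per-node files prefix each line with "Node N "; strip that prefix
        mem=("${mem[@]#Node * }")
        local IFS=': '
        while read -r var val _; do
            # the long continue-scan seen in the trace: skip keys until $get
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1   # assumption: key not found (branch never taken in the trace)
    }

Usage matching the trace: "get_meminfo HugePages_Rsvd" prints 0 from /proc/meminfo, while "get_meminfo HugePages_Surp 0" reads node0's meminfo file instead.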
00:03:14.099 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:03:14.099 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:03:14.099 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:03:14.099 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:03:14.099 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:14.099 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:14.099 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:14.099 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:14.099 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:14.099 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:14.099 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:14.099 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:14.099 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:14.099 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:14.099 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:14.099 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:14.099 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:14.099 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:14.099 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 41798772 kB' 'MemAvailable: 45345220 kB' 'Buffers: 2704 kB' 'Cached: 14373112 kB' 'SwapCached: 0 kB' 'Active: 11341124 kB' 'Inactive: 3510472 kB' 'Active(anon): 10904708 kB' 'Inactive(anon): 0 kB' 'Active(file): 436416 kB' 'Inactive(file): 3510472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 479016 kB' 'Mapped: 216116 kB' 'Shmem: 10428928 kB' 'KReclaimable: 186800 kB' 'Slab: 541040 kB' 'SReclaimable: 186800 kB' 'SUnreclaim: 354240 kB' 'KernelStack: 12736 kB' 'PageTables: 8004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 12020608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196016 kB' 'VmallocChunk: 0 kB' 'Percpu: 31872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1646172 kB' 'DirectMap2M: 20293632 kB' 'DirectMap1G: 47185920 kB'
[00:03:14.099-14.363: per-key scan elided: setup/common.sh@31-32 "read -r var val _" / "continue" over every snapshot key until HugePages_Total matches]
00:03:14.363 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:14.363 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:14.363 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
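Taken together, the checks above assert that every page the test configured is visible system-wide and that none are surplus or reserved; the snapshots are also self-consistent (Hugetlb 2097152 kB = 1024 pages x 2048 kB Hugepagesize). A sketch of that accounting, reusing the get_meminfo sketch above ("total" is a local name introduced here for illustration):

    nr_hugepages=1024                       # what the test configured
    surp=$(get_meminfo HugePages_Surp)      # -> 0
    resv=$(get_meminfo HugePages_Rsvd)      # -> 0
    total=$(get_meminfo HugePages_Total)    # -> 1024
    # mirrors hugepages.sh@107/@110 above: all pages accounted for, none extra
    (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2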
00:03:14.363 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:14.363 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:14.363 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:14.363 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:14.363 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:14.363 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:14.363 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:14.363 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:14.363 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:14.363 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:14.363 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:14.363 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:14.363 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:03:14.363 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:14.363 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:14.363 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:14.363 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:14.363 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:14.363 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:14.363 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:14.363 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:14.363 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:14.363 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 26601516 kB' 'MemUsed: 6275424 kB' 'SwapCached: 0 kB' 'Active: 3970792 kB' 'Inactive: 200196 kB' 'Active(anon): 3797660 kB' 'Inactive(anon): 0 kB' 'Active(file): 173132 kB' 'Inactive(file): 200196 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4082956 kB' 'Mapped: 100840 kB' 'AnonPages: 91220 kB' 'Shmem: 3709628 kB' 'KernelStack: 6920 kB' 'PageTables: 3700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61564 kB' 'Slab: 259932 kB' 'SReclaimable: 61564 kB' 'SUnreclaim: 198368 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[00:03:14.363-: per-key scan in progress: setup/common.sh@31-32 "read -r var val _" / "continue" over the node0 snapshot keys, looking for HugePages_Surp]
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.364 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.364 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.364 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.364 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.364 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.364 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.364 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.364 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.364 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:14.364 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:14.364 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:14.364 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:14.364 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:14.364 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:14.364 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:14.364 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:14.364 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:14.364 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:14.364 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.364 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:14.364 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:14.364 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:14.364 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:14.364 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.364 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664784 kB' 'MemFree: 15196532 kB' 'MemUsed: 12468252 kB' 'SwapCached: 0 kB' 'Active: 7370692 kB' 'Inactive: 3310276 kB' 'Active(anon): 7107408 kB' 'Inactive(anon): 0 kB' 'Active(file): 263284 kB' 'Inactive(file): 3310276 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10292924 kB' 'Mapped: 115276 kB' 'AnonPages: 388160 kB' 'Shmem: 6719364 kB' 'KernelStack: 5864 kB' 'PageTables: 4472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 125236 kB' 'Slab: 281108 kB' 'SReclaimable: 125236 kB' 'SUnreclaim: 
155872 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.365 00:56:30 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.365 00:56:30 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.365 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.366 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.366 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.366 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.366 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.366 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.366 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.366 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.366 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.366 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.366 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.366 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.366 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.366 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.366 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.366 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.366 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.366 
00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.366 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.366 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.366 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.366 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.366 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.366 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.366 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.366 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.366 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.366 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.366 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.366 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.366 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.366 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.366 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:14.366 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:14.366 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:14.366 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:14.366 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:14.366 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:14.366 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:14.366 node0=512 expecting 512 00:03:14.366 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:14.366 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:14.366 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:14.366 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:14.366 node1=512 expecting 512 00:03:14.366 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:14.366 00:03:14.366 real 0m1.477s 00:03:14.366 user 0m0.615s 00:03:14.366 sys 0m0.825s 00:03:14.366 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:14.366 00:56:30 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:14.366 ************************************ 00:03:14.366 END TEST per_node_1G_alloc 00:03:14.366 ************************************ 00:03:14.366 00:56:30 
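[editor's note: the records condensed above all come from one helper. get_meminfo in setup/common.sh reads either /proc/meminfo or /sys/devices/system/node/node<N>/meminfo, strips the "Node <N> " prefix from the per-node file, then scans line by line with IFS=': ' until the requested field matches, which is exactly what produces the long [[ ... ]] / continue runs under xtrace. A minimal bash sketch of that logic, reconstructed from the trace (an approximation for illustration, not the shipped setup/common.sh source):

    #!/usr/bin/env bash
    shopt -s extglob   # required for the +([0-9]) pattern below

    # Reconstruction of the get_meminfo helper seen in the xtrace above.
    # Usage: get_meminfo <Field> [numa_node]
    get_meminfo() {
        local get=$1 node=$2
        local var val _
        local mem_f=/proc/meminfo mem

        # Per-node queries read that NUMA node's own meminfo, as in the trace.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # The per-node file prefixes every line with "Node <N> "; strip it.
        mem=("${mem[@]#Node +([0-9]) }")

        # Scan "Field: value kB" lines until the requested field is found;
        # this loop is what emits the long [[ ... ]] / continue runs above.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Surp 1   # prints 0 for the node1 snapshot above

Scanning in pure bash keeps the helper free of awk/grep subprocesses, which is presumably why the trace shows many tiny [[ ]] records rather than a single external call.]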
00:56:30 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:14.366 00:56:30 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:03:14.366 00:56:30 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:14.366 00:56:30 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:14.366 00:56:30 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:14.366 ************************************
00:03:14.366 START TEST even_2G_alloc
00:03:14.366 ************************************
00:56:30 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc
00:56:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:56:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:56:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:56:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:56:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:56:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:56:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:56:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:56:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:56:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:56:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:56:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:56:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:56:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:56:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:56:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:56:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:56:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:56:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:56:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:56:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:56:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:56:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:56:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:56:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:56:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:56:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:56:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:15.747 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:15.747 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:15.747 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:15.747 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:15.747 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:15.747 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:15.747 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:15.747 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:15.747 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:15.747 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:15.747 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:15.747 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:15.747 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:15.747 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:15.747 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:15.747 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:15.747 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 41796144 kB' 'MemAvailable: 45342588 kB' 'Buffers: 2704 kB' 'Cached: 14373212 kB' 'SwapCached: 0 kB' 'Active: 11347100 kB' 'Inactive: 3510472 kB' 'Active(anon): 10910684 kB' 'Inactive(anon): 0 kB' 'Active(file): 436416 kB' 'Inactive(file): 3510472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 484856 kB' 'Mapped: 216592 kB' 'Shmem: 10429028 kB' 'KReclaimable: 186792 kB' 'Slab: 541308 kB' 'SReclaimable: 186792 kB' 'SUnreclaim: 354516 kB' 'KernelStack: 12768 kB' 'PageTables: 8068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 12027464 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196148 kB' 'VmallocChunk: 0 kB' 'Percpu: 31872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1646172 kB' 'DirectMap2M: 20293632 kB' 'DirectMap1G: 47185920 kB'
[xtrace condensed: setup/common.sh@31-32 scanned the fields from MemTotal through HardwareCorrupted, one "continue" per field that was not AnonHugePages]
00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 41796004 kB' 'MemAvailable: 45342448 kB' 'Buffers: 2704 kB' 'Cached: 14373212 kB' 'SwapCached: 0 kB' 'Active: 11342412 kB' 'Inactive: 3510472 kB' 'Active(anon): 10905996 kB' 'Inactive(anon): 0 kB' 'Active(file): 436416 kB' 'Inactive(file): 3510472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 480160 kB' 'Mapped: 216564 kB' 'Shmem: 10429028 kB' 'KReclaimable: 186792 kB' 'Slab: 541296 kB' 'SReclaimable: 186792 kB' 'SUnreclaim: 354504 kB' 'KernelStack: 12784 kB' 'PageTables: 8084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 12023112 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196128 kB' 'VmallocChunk: 0 kB' 'Percpu: 31872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1646172 kB' 'DirectMap2M: 20293632 kB' 'DirectMap1G: 47185920 kB'
[xtrace condensed: setup/common.sh@31-32 walks this snapshot's fields from MemTotal onward with IFS=': ' read -r var val _, one "continue" per field on the way to HugePages_Surp; the scan continues]
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.750 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.750 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.750 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.750 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.750 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.750 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.750 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.750 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.750 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.750 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.750 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.750 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.750 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.750 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.750 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.750 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.750 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.750 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.750 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.750 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.750 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.750 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.750 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.750 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.750 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.750 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.750 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.750 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.750 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.750 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.750 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.750 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.750 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.750 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:15.750 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.750 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.750 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.750 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.750 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.750 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.750 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.750 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.750 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.750 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.750 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.750 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.750 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.750 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.750 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.750 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.750 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.750 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.750 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.750 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.750 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.750 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.751 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.751 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.751 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.751 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.751 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.751 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.751 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.751 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.751 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.751 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.751 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.751 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # continue 00:03:15.751 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.751 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.751 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.751 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.751 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.751 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.751 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.751 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.751 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.751 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.751 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.751 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.751 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.751 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.751 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.751 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.751 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.751 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.751 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.751 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.751 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.751 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.751 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.751 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.751 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.751 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.751 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.751 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:15.751 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:15.751 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:15.751 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:15.751 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:15.751 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:15.751 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:15.751 00:56:31 
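[Editor's note: the trace above shows setup/common.sh's get_meminfo walking /proc/meminfo one "key: value" pair at a time with IFS=': ' and read -r, skipping every key that does not match the requested one and echoing the value on a match. A minimal, self-contained bash sketch of that same pattern follows; the function name is illustrative, not SPDK's actual helper.]

    get_meminfo_sketch() {
        local get=$1 var val _
        # Split each "Key:   value kB" line on ':' and spaces; var gets the
        # key, val the number, _ swallows the trailing unit if present.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < /proc/meminfo
        return 1 # key not found
    }

    # Usage, mirroring the call traced above:
    #   get_meminfo_sketch HugePages_Surp   -> prints 0 on this test box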
00:03:15.751 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:15.751 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:15.751 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:15.751 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:15.751 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:15.751 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:15.751 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:15.751 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:15.751 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:15.751 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:15.751 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:15.751 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:15.751 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 41789368 kB' 'MemAvailable: 45335812 kB' 'Buffers: 2704 kB' 'Cached: 14373232 kB' 'SwapCached: 0 kB' 'Active: 11347008 kB' 'Inactive: 3510472 kB' 'Active(anon): 10910592 kB' 'Inactive(anon): 0 kB' 'Active(file): 436416 kB' 'Inactive(file): 3510472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 484748 kB' 'Mapped: 216564 kB' 'Shmem: 10429048 kB' 'KReclaimable: 186792 kB' 'Slab: 541396 kB' 'SReclaimable: 186792 kB' 'SUnreclaim: 354604 kB' 'KernelStack: 12800 kB' 'PageTables: 8152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 12027500 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196100 kB' 'VmallocChunk: 0 kB' 'Percpu: 31872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1646172 kB' 'DirectMap2M: 20293632 kB' 'DirectMap1G: 47185920 kB'
00:03:15.751 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [per-key scan elided: every field from MemTotal through HugePages_Free is tested against HugePages_Rsvd and skipped via continue]
00:03:15.753 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:15.753 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:15.753 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:15.753 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:15.753 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:15.753 nr_hugepages=1024
00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:15.753 resv_hugepages=0
00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:15.753 surplus_hugepages=0
00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:15.753 anon_hugepages=0
00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:15.753 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
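[Editor's note: the assertions at hugepages.sh@107-109 encode the accounting invariant the test relies on: the kernel's HugePages_Total must equal the pages the test requested plus any surplus and reserved pages. A tiny bash sketch of that check, with illustrative variable names bound to the values just traced:]

    nr_hugepages=1024   # pages the test configured
    surp=0              # HugePages_Surp, read above
    resv=0              # HugePages_Rsvd, read above
    total=1024          # HugePages_Total, read by the next call below
    (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2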
00:03:15.753 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:15.753 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:15.753 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:15.753 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:15.753 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:15.753 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:15.753 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:15.753 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:15.753 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:15.753 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:15.753 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:15.753 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:15.753 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 41790056 kB' 'MemAvailable: 45336500 kB' 'Buffers: 2704 kB' 'Cached: 14373256 kB' 'SwapCached: 0 kB' 'Active: 11347392 kB' 'Inactive: 3510472 kB' 'Active(anon): 10910976 kB' 'Inactive(anon): 0 kB' 'Active(file): 436416 kB' 'Inactive(file): 3510472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 485132 kB' 'Mapped: 217020 kB' 'Shmem: 10429072 kB' 'KReclaimable: 186792 kB' 'Slab: 541396 kB' 'SReclaimable: 186792 kB' 'SUnreclaim: 354604 kB' 'KernelStack: 12800 kB' 'PageTables: 8172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 12027524 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196100 kB' 'VmallocChunk: 0 kB' 'Percpu: 31872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1646172 kB' 'DirectMap2M: 20293632 kB' 'DirectMap1G: 47185920 kB'
00:03:15.753 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [per-key scan elided: every field from MemTotal through Unaccepted is tested against HugePages_Total and skipped via continue]
00:03:15.755 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:15.755 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:15.755 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:15.755 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
var val _ 00:03:15.755 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.755 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.755 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.755 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.755 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.755 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:15.755 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:15.755 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:15.755 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:15.755 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:15.755 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:15.755 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:15.755 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:15.755 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:15.755 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:15.755 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:15.755 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:15.755 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:15.755 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:15.755 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:15.755 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:15.755 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:15.755 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:15.755 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.755 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:15.755 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:15.755 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.755 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.755 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.755 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.755 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 26603104 kB' 'MemUsed: 6273836 kB' 'SwapCached: 0 kB' 'Active: 3970932 kB' 'Inactive: 200196 kB' 'Active(anon): 3797800 kB' 'Inactive(anon): 0 kB' 
'Active(file): 173132 kB' 'Inactive(file): 200196 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4083016 kB' 'Mapped: 100860 kB' 'AnonPages: 91320 kB' 'Shmem: 3709688 kB' 'KernelStack: 6888 kB' 'PageTables: 3584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61556 kB' 'Slab: 260064 kB' 'SReclaimable: 61556 kB' 'SUnreclaim: 198508 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:15.755 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.755 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.755 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.755 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.755 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.755 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.755 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.755 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.755 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.755 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.755 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.755 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.755 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.755 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.755 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.755 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.755 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.755 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.755 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.755 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.755 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.755 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.755 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.755 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.755 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.755 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.755 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.755 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.755 00:56:31 
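The xtrace above is one pass of the get_meminfo helper in setup/common.sh: read a meminfo file into an array, strip any per-node prefix, then scan line by line until the requested key matches. A minimal sketch of what that trace implies, reconstructed from the traced commands rather than copied from the SPDK source; names follow the trace:

shopt -s extglob   # needed for the +([0-9]) pattern used below

# get_meminfo KEY [NODE]: print KEY's value from /proc/meminfo, or from the
# per-NUMA-node file when NODE is given.
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local -a mem
    local var val _ line
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node N "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # the long run of "continue" in the trace
        echo "$val"                        # e.g. 1024 for HugePages_Total
        return 0
    done
    return 1
}

In this run, get_meminfo HugePages_Total printed 1024 from /proc/meminfo, and get_meminfo HugePages_Surp 0 printed 0 from node0's sysfs file.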
00:03:15.756 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:15.756 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:15.756 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:15.756 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:15.756 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:03:15.756 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:15.756 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:15.756 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:15.756 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:15.756 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:15.756 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:15.756 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:15.756 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:15.756 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664784 kB' 'MemFree: 15190152 kB' 'MemUsed: 12474632 kB' 'SwapCached: 0 kB' 'Active: 7370928 kB' 'Inactive: 3310276 kB' 'Active(anon): 7107644 kB' 'Inactive(anon): 0 kB' 'Active(file): 263284 kB' 'Inactive(file): 3310276 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10292960 kB' 'Mapped: 115288 kB' 'AnonPages: 388340 kB' 'Shmem: 6719400 kB' 'KernelStack: 5896 kB' 'PageTables: 4520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 125236 kB' 'Slab: 281332 kB' 'SReclaimable: 125236 kB' 'SUnreclaim: 156096 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:15.756 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
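The entries that follow fold the surplus and reserved pages into each node's count and check the even split. A sketch of that bookkeeping, with array names taken from the trace; the 512/512 values are the ones observed in this run:

# Per-node bookkeeping from hugepages.sh@115-@128 (reconstruction, not the
# verbatim script; observed values are hard-coded for illustration).
nr_hugepages=1024 surp=0 resv=0
declare -a nodes_test=([0]=512 [1]=512)    # per-node HugePages_Total from sysfs
(( 1024 == nr_hugepages + surp + resv )) || echo "unexpected total"
for node in "${!nodes_test[@]}"; do
    surp_node=0                            # get_meminfo HugePages_Surp $node returned 0 here
    (( nodes_test[node] += resv ))         # hugepages.sh@116
    (( nodes_test[node] += surp_node ))    # hugepages.sh@117
    echo "node$node=${nodes_test[node]} expecting 512"
done
# The real script also collapses the counts into the sorted_t/sorted_s
# associative arrays (hugepages.sh@127) so it can assert that every node
# ended up with the same value before the final [[ 512 == 512 ]] check.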
00:03:16.016 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ... == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] (every node1 meminfo key is compared and skipped with "continue" until the match below)
00:03:16.017 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:16.017 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:16.017 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:16.017 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:16.017 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:16.017 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:16.017 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:16.017 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:16.017 node0=512 expecting 512
00:03:16.017 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:16.017 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:16.017 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:16.017 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:16.017 node1=512 expecting 512
00:03:16.017 00:56:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:16.017
00:03:16.017 real	0m1.575s
00:03:16.017 user	0m0.651s
00:03:16.017 sys	0m0.890s
00:03:16.017 00:56:31 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:16.017 00:56:31 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:16.017 ************************************
00:03:16.017 END TEST even_2G_alloc
00:03:16.017 ************************************
00:03:16.017 00:56:31 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:16.017 00:56:31 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:16.017 00:56:31 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:16.017 00:56:31 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:16.017 00:56:31 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:16.017 ************************************
00:03:16.017 START TEST odd_alloc
00:03:16.017 ************************************
00:03:16.017 00:56:31 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc
00:03:16.017 00:56:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:16.017 00:56:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:03:16.017 00:56:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:16.017 00:56:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:16.017 00:56:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:03:16.017 00:56:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:16.017 00:56:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:16.017 00:56:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:16.017 00:56:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:03:16.017 00:56:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:16.017 00:56:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:16.017 00:56:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:16.017 00:56:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:16.017 00:56:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:16.017 00:56:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:16.017 00:56:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:16.017 00:56:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:03:16.017 00:56:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:16.017 00:56:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:16.017 00:56:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:03:16.017 00:56:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:16.017 00:56:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:16.017 00:56:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:16.018 00:56:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:03:16.018 00:56:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:03:16.018 00:56:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:03:16.018 00:56:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:16.018 00:56:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
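Why nr_hugepages became 1025: odd_alloc requests HUGEMEM=2049 MiB backed by 2048 kB pages, and 2098176 kB is not a whole number of pages. The observed count matches rounding up to whole pages; the exact rounding rule is an assumption here, only the inputs and the result come from the trace:

size_kb=2098176 hugepagesize_kb=2048
echo $(( (size_kb + hugepagesize_kb - 1) / hugepagesize_kb ))   # -> 1025
# The per-node split is written back to front: nodes_test[1]=512 first, then
# nodes_test[0]=513, giving the deliberately odd 513 + 512 = 1025 total.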
00:03:17.398 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:17.398 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:17.398 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:17.398 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:17.398 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:17.398 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:17.398 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:17.398 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:17.398 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:17.398 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:17.398 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:17.398 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:17.398 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:17.398 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:17.398 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:17.398 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:17.398 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:17.398 00:56:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:03:17.398 00:56:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:03:17.398 00:56:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:17.398 00:56:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:17.398 00:56:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:17.398 00:56:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:17.398 00:56:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:17.398 00:56:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
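The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test above is a transparent-hugepage gate: the kernel brackets the active mode in /sys/kernel/mm/transparent_hugepage/enabled, and anonymous huge pages can only exist when that mode is not [never]. A sketch of the gate, reusing the get_meminfo reconstruction shown earlier; this is inferred from the trace, not copied from the script:

thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
if [[ $thp != *"[never]"* ]]; then
    anon=$(get_meminfo AnonHugePages)   # THP may be in use; measure it (0 kB in this run)
else
    anon=0                              # THP disabled; nothing to account for
fi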
186792 kB' 'Slab: 541380 kB' 'SReclaimable: 186792 kB' 'SUnreclaim: 354588 kB' 'KernelStack: 12768 kB' 'PageTables: 7828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609864 kB' 'Committed_AS: 12006988 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196112 kB' 'VmallocChunk: 0 kB' 'Percpu: 31872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1646172 kB' 'DirectMap2M: 20293632 kB' 'DirectMap1G: 47185920 kB' 00:03:17.398 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.398 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.398 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.398 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.398 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.398 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.398 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.398 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.398 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.398 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.398 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.398 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.398 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.398 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.398 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.398 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.398 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.398 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.398 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.398 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.398 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.398 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.398 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.399 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.399 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.399 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.399 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.399 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.399 00:56:33 
setup.sh.hugepages.odd_alloc -- [/proc/meminfo scan continues: Inactive through HardwareCorrupted each read and skipped; none matches AnonHugePages]
00:03:17.400 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:17.400 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:17.400 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:17.400 00:56:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:17.400 00:56:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:17.400 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:17.400 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:17.400 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:17.400 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:17.400 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:17.400 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:17.400 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
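The key-by-key comparison traced above is get_meminfo from setup/common.sh walking every /proc/meminfo line until it reaches the requested key. A minimal bash sketch of that logic, reconstructed from the trace alone (the real function's body may differ in detail; the fallback return and the loop shape are assumptions):

shopt -s extglob                                  # needed for the "Node N " prefix strip below

get_meminfo() {                                   # sketch reconstructed from the trace
    local get=$1 node=$2
    local var val _
    local mem_f=/proc/meminfo mem
    # with a NUMA node argument, read the per-node counters from sysfs instead
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem <"$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")              # per-node lines carry a "Node N " prefix
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<<"$line"
        [[ $var == "$get" ]] || continue          # the long run of "continue"s in the trace
        echo "$val"                               # e.g. AnonHugePages -> 0
        return 0
    done
    return 1                                      # assumed: requested key not present
}

get_meminfo AnonHugePages                         # prints 0 on the box traced above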
00:03:17.400 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:17.400 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:17.400 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:17.400 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:17.400 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 41807532 kB' 'MemAvailable: 45353976 kB' 'Buffers: 2704 kB' 'Cached: 14373348 kB' 'SwapCached: 0 kB' 'Active: 11339196 kB' 'Inactive: 3510472 kB' 'Active(anon): 10902780 kB' 'Inactive(anon): 0 kB' 'Active(file): 436416 kB' 'Inactive(file): 3510472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 476900 kB' 'Mapped: 215068 kB' 'Shmem: 10429164 kB' 'KReclaimable: 186792 kB' 'Slab: 541380 kB' 'SReclaimable: 186792 kB' 'SUnreclaim: 354588 kB' 'KernelStack: 12816 kB' 'PageTables: 7968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609864 kB' 'Committed_AS: 12006756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196112 kB' 'VmallocChunk: 0 kB' 'Percpu: 31872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1646172 kB' 'DirectMap2M: 20293632 kB' 'DirectMap1G: 47185920 kB'
00:03:17.401 00:56:33 setup.sh.hugepages.odd_alloc -- [/proc/meminfo scan: MemTotal through HugePages_Rsvd each read and skipped; none matches HugePages_Surp]
00:03:17.401 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:17.401 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:17.401 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:17.401 00:56:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:17.401 00:56:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:17.401 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:17.401 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:17.401 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:17.401 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:17.401 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:17.401 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:17.401 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:17.401 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:17.401 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:17.401 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:17.401 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:17.401 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 41809100 kB' 'MemAvailable: 45355544 kB' 'Buffers: 2704 kB' 'Cached: 14373364 kB' 'SwapCached: 0 kB' 'Active: 11339276 kB' 'Inactive: 3510472 kB' 'Active(anon): 10902860 kB' 'Inactive(anon): 0 kB' 'Active(file): 436416 kB' 'Inactive(file): 3510472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 476716 kB' 'Mapped: 215068 kB' 'Shmem: 10429180 kB' 'KReclaimable: 186792 kB' 'Slab: 541372 kB' 'SReclaimable: 186792 kB' 'SUnreclaim: 354580 kB' 'KernelStack: 12800 kB' 'PageTables: 7948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609864 kB' 'Committed_AS: 12006408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196096 kB' 'VmallocChunk: 0 kB' 'Percpu: 31872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1646172 kB' 'DirectMap2M: 20293632 kB' 'DirectMap1G: 47185920 kB'
00:03:17.402 00:56:33 setup.sh.hugepages.odd_alloc -- [/proc/meminfo scan: MemTotal through HugePages_Free each read and skipped; none matches HugePages_Rsvd]
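Each printf above is a full snapshot of /proc/meminfo taken by get_meminfo. For the counters this test actually cares about, an equivalent spot check outside the harness would be (illustrative only, not the harness's own command):

awk '/^HugePages_(Total|Free|Rsvd|Surp):/' /proc/meminfo
# HugePages_Total:    1025
# HugePages_Free:     1025
# HugePages_Rsvd:        0
# HugePages_Surp:        0    <- matches the snapshots on this box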
00:03:17.403 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:17.403 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:17.403 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:17.403 00:56:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:17.403 00:56:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:03:17.403 nr_hugepages=1025
00:03:17.403 00:56:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:17.403 resv_hugepages=0
00:03:17.403 00:56:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:17.403 surplus_hugepages=0
00:03:17.403 00:56:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:17.403 anon_hugepages=0
00:03:17.403 00:56:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:17.404 00:56:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:03:17.404 00:56:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
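Taken together, the guards traced at hugepages.sh@107 and @109 are a consistency check on the odd allocation. With the literal 1025 being the already-expanded expected total (its exact origin is not visible in this trace), they amount to:

# variable names and values exactly as traced above
nr_hugepages=1025 surp=0 resv=0
(( 1025 == nr_hugepages + surp + resv ))          # no surplus/reserved pages inflate the count
(( 1025 == nr_hugepages ))                        # the odd page count was honored exactly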
00:03:17.404 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:17.404 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:17.404 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:17.404 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:17.404 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:17.404 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:17.404 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:17.404 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:17.404 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:17.404 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:17.404 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:17.404 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 41808904 kB' 'MemAvailable: 45355348 kB' 'Buffers: 2704 kB' 'Cached: 14373364 kB' 'SwapCached: 0 kB' 'Active: 11338688 kB' 'Inactive: 3510472 kB' 'Active(anon): 10902272 kB' 'Inactive(anon): 0 kB' 'Active(file): 436416 kB' 'Inactive(file): 3510472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 476332 kB' 'Mapped: 215068 kB' 'Shmem: 10429180 kB' 'KReclaimable: 186792 kB' 'Slab: 541396 kB' 'SReclaimable: 186792 kB' 'SUnreclaim: 354604 kB' 'KernelStack: 12752 kB' 'PageTables: 7804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609864 kB' 'Committed_AS: 12006432 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196064 kB' 'VmallocChunk: 0 kB' 'Percpu: 31872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1646172 kB' 'DirectMap2M: 20293632 kB' 'DirectMap1G: 47185920 kB'
00:03:17.404 00:56:33 setup.sh.hugepages.odd_alloc -- [/proc/meminfo scan in progress: MemTotal through SecPageTables read and skipped; target HugePages_Total not yet reached]
00:03:17.405 00:56:33 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 26618640 kB' 'MemUsed: 6258300 kB' 'SwapCached: 0 kB' 'Active: 3969436 kB' 'Inactive: 200196 kB' 'Active(anon): 3796304 kB' 'Inactive(anon): 0 kB' 'Active(file): 173132 kB' 'Inactive(file): 200196 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4083148 kB' 'Mapped: 100840 kB' 'AnonPages: 89640 kB' 'Shmem: 3709820 kB' 'KernelStack: 6888 kB' 'PageTables: 3396 kB' 
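For reference, the get_meminfo helper traced here pulls a single field out of /proc/meminfo, or out of a per-node /sys/devices/system/node/nodeN/meminfo file whose lines carry a "Node N " prefix. A minimal sketch of that pattern, assuming bash with extglob; the name get_meminfo_sketch is hypothetical and this is not the verbatim SPDK helper:

#!/usr/bin/env bash
shopt -s extglob                       # needed for the +([0-9]) pattern below

# get_meminfo_sketch FIELD [NODE]
# Print FIELD's value system-wide, or for NUMA node NODE when given.
get_meminfo_sketch() {
    local get=$1 node=$2 var val _
    local mem_f=/proc/meminfo
    local -a mem

    # Per-node statistics live in sysfs; each line starts with "Node N ".
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix

    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"                # e.g. 1025 for HugePages_Total
            return 0
        fi
    done
    return 1
}

Usage mirrors the calls in the trace, e.g. get_meminfo_sketch HugePages_Surp 0.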
00:03:17.405 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 26618640 kB' 'MemUsed: 6258300 kB' 'SwapCached: 0 kB' 'Active: 3969436 kB' 'Inactive: 200196 kB' 'Active(anon): 3796304 kB' 'Inactive(anon): 0 kB' 'Active(file): 173132 kB' 'Inactive(file): 200196 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4083148 kB' 'Mapped: 100840 kB' 'AnonPages: 89640 kB' 'Shmem: 3709820 kB' 'KernelStack: 6888 kB' 'PageTables: 3396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61556 kB' 'Slab: 260040 kB' 'SReclaimable: 61556 kB' 'SUnreclaim: 198484 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace condensed: node0 meminfo fields read and skipped with 'continue' until the HugePages_Surp line matches]
00:03:17.406 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:17.406 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:17.406 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:17.406 00:56:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:17.406 00:56:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:17.406 00:56:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:17.407 00:56:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:17.407 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:17.407 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1
00:03:17.407 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:17.407 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:17.407 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:17.407 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:17.407 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:17.407 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:17.407 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:17.407 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:17.407 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:17.407 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664784 kB' 'MemFree: 15190652 kB' 'MemUsed: 12474132 kB' 'SwapCached: 0 kB' 'Active: 7369192 kB' 'Inactive: 3310276 kB' 'Active(anon): 7105908 kB' 'Inactive(anon): 0 kB' 'Active(file): 263284 kB' 'Inactive(file): 3310276 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10292964 kB' 'Mapped: 114228 kB' 'AnonPages: 386584 kB' 'Shmem: 6719404 kB' 'KernelStack: 5880 kB' 'PageTables: 4452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 125236 kB' 'Slab: 281356 kB' 'SReclaimable: 125236 kB' 'SUnreclaim: 156120 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
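What this loop is establishing: odd_alloc asked for an odd total of 1025 hugepages on a two-node machine, and the kernel split them 512/513 (see the two per-node dumps above); the test then sums the per-node counts, folding surplus and reserved pages into its check, and compares against the requested total. A sketch of just the core sum, assuming the hypothetical get_meminfo_sketch from the earlier snippet (SPDK's setup/hugepages.sh accounting differs in detail):

#!/usr/bin/env bash
shopt -s extglob                      # for the node+([0-9]) glob

# verify_split_sketch EXPECTED
# Sum HugePages_Total over every NUMA node; exit status is pass/fail.
verify_split_sketch() {
    local expected=$1                 # e.g. 1025 for the odd_alloc case
    local node n total=0

    for node in /sys/devices/system/node/node+([0-9]); do
        n=${node##*node}              # "/sys/.../node1" -> "1"
        (( total += $(get_meminfo_sketch HugePages_Total "$n") ))
    done

    (( total == expected ))
}

For the dumps above this would add 512 + 513 and match the 1025 checked at setup/hugepages.sh@110.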
[xtrace condensed: node1 meminfo fields read and skipped with 'continue' until the HugePages_Surp line matches]
00:03:17.408 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:17.408 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:17.408 00:56:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:17.408 00:56:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:17.408 00:56:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:17.408 00:56:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:17.408 00:56:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:17.408 00:56:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:17.408 node0=512 expecting 513 00:03:17.408 00:56:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:17.408 00:56:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:17.408 00:56:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:17.408 00:56:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:17.408 node1=513 expecting 512 00:03:17.408 00:56:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:17.408 00:03:17.408 real 0m1.553s 00:03:17.408 user 0m0.616s 00:03:17.408 sys 0m0.900s 00:03:17.408 00:56:33 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:17.408 00:56:33 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:17.408 ************************************ 00:03:17.408 END TEST odd_alloc 00:03:17.408 ************************************ 00:03:17.408 00:56:33 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:17.408 00:56:33 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:17.408 00:56:33 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:17.408 00:56:33 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:17.408 00:56:33 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:17.665 ************************************ 00:03:17.665 START TEST custom_alloc 00:03:17.665 ************************************ 00:03:17.665 00:56:33 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:03:17.665 00:56:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:17.665 00:56:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:17.665 00:56:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:17.665 00:56:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:17.665 00:56:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:17.665 00:56:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:17.665 00:56:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:17.665 00:56:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:17.665 00:56:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:17.665 00:56:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:17.665 00:56:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # 
get_test_nr_hugepages_per_node 00:03:17.665 00:56:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:17.665 00:56:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:17.665 00:56:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:17.665 00:56:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:17.665 00:56:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:17.665 00:56:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:17.665 00:56:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:17.665 00:56:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:17.665 00:56:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:17.666 00:56:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:17.666 00:56:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:17.666 00:56:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:17.666 00:56:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:17.666 00:56:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:17.666 00:56:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:17.666 00:56:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:17.666 00:56:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:17.666 00:56:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:17.666 00:56:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:17.666 00:56:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:17.666 00:56:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:17.666 00:56:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:17.666 00:56:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:17.666 00:56:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:17.666 00:56:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:17.666 00:56:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:17.666 00:56:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:17.666 00:56:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:17.666 00:56:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:17.666 00:56:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:17.666 00:56:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:17.666 00:56:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:17.666 00:56:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:17.666 00:56:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:17.666 00:56:33 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:17.666 00:56:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:17.666 00:56:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:17.666 00:56:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:17.666 00:56:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:17.666 00:56:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:17.666 00:56:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:17.666 00:56:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:17.666 00:56:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:17.666 00:56:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:17.666 00:56:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:17.666 00:56:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:17.666 00:56:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:17.666 00:56:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:17.666 00:56:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:17.666 00:56:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:17.666 00:56:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:17.666 00:56:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:17.666 00:56:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:17.666 00:56:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:17.666 00:56:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:17.666 00:56:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:17.666 00:56:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:17.666 00:56:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:17.666 00:56:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:17.666 00:56:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:17.666 00:56:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:19.046 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:19.046 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:19.046 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:19.046 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:19.046 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:19.046 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:19.046 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:19.046 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:19.046 0000:80:04.7 
(8086 0e27): Already using the vfio-pci driver 00:03:19.046 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:19.046 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:19.046 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:19.046 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:19.046 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:19.046 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:19.046 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:19.046 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:19.046 00:56:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:19.046 00:56:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:19.046 00:56:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:19.046 00:56:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:19.046 00:56:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:19.046 00:56:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:19.046 00:56:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:19.046 00:56:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:19.046 00:56:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:19.046 00:56:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:19.046 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:19.046 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:19.046 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:19.046 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:19.046 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:19.046 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:19.046 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:19.046 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:19.046 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:19.046 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.046 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.046 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 40755760 kB' 'MemAvailable: 44302204 kB' 'Buffers: 2704 kB' 'Cached: 14373480 kB' 'SwapCached: 0 kB' 'Active: 11339092 kB' 'Inactive: 3510472 kB' 'Active(anon): 10902676 kB' 'Inactive(anon): 0 kB' 'Active(file): 436416 kB' 'Inactive(file): 3510472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 476472 kB' 'Mapped: 215120 kB' 'Shmem: 10429296 kB' 'KReclaimable: 186792 kB' 'Slab: 540720 kB' 'SReclaimable: 186792 kB' 'SUnreclaim: 353928 kB' 'KernelStack: 12752 kB' 'PageTables: 7776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 
'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086600 kB' 'Committed_AS: 12007000 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196160 kB' 'VmallocChunk: 0 kB' 'Percpu: 31872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1646172 kB' 'DirectMap2M: 20293632 kB' 'DirectMap1G: 47185920 kB' 00:03:19.046 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.046 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.046 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.046 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.046 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.046 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.046 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.046 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.046 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.046 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.046 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.046 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.046 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.046 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.046 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.046 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.046 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.046 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.046 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.046 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.046 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.046 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.046 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.046 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.046 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.046 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.046 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.046 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.046 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
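What the custom_alloc trace above computes: get_test_nr_hugepages_per_node first splits the default 512 pages evenly across the 2 NUMA nodes (256 each), the test then pins an asymmetric layout of nodes_hp[0]=512 and nodes_hp[1]=1024, and hugepages.sh@181-187 serializes that map into the HUGENODE string while summing the expected total. A minimal bash sketch of that assembly step, reconstructed from the trace (join_csv is a hypothetical helper; this is not the SPDK script itself):

#!/usr/bin/env bash
# Sketch of the HUGENODE assembly traced above (hugepages.sh@175-187).
# Reconstructed for illustration; the values are the ones from this run:
# nodes_hp[0]=512 and nodes_hp[1]=1024 on a 2-node machine.

declare -a nodes_hp=([0]=512 [1]=1024)

HUGENODE=()
_nr_hugepages=0
for node in "${!nodes_hp[@]}"; do
    HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")   # @182 in the trace
    (( _nr_hugepages += nodes_hp[node] ))             # @183 in the trace
done

# join_csv is hypothetical; the traced script arrives at the same
# comma-joined string at @187.
join_csv() { local IFS=,; echo "$*"; }
echo "HUGENODE=$(join_csv "${HUGENODE[@]}")"   # nodes_hp[0]=512,nodes_hp[1]=1024
echo "nr_hugepages=$_nr_hugepages"             # 1536, matching @188 above

setup.sh is then run with HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024', and everything from verify_nr_hugepages onward is the test confirming the kernel actually allocated those 1536 pages.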
00:03:19.046 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.046 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.046 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.046 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.046 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.046 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.046 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.046 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.046 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.046 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.046 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.046 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.046 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.046 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.046 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.046 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.046 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.046 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.047 
00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.047 00:56:34 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 
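The scan in progress here is the second get_meminfo read (setup/common.sh@17-33) of this verification pass: AnonHugePages just came back 0 (anon=0), and HugePages_Surp is being fetched next. The helper reads /proc/meminfo once per requested field, strips any "Node N " prefix so the same code also works on per-node meminfo files, splits each line on ': ', and echoes the value of the matching key; every "[[ key == ... ]] / continue" pair in this log is one step of that scan. A sketch of the parser, reconstructed from the trace rather than copied from SPDK's setup/common.sh:

#!/usr/bin/env bash
# get_meminfo, as reconstructed from the xtrace above (illustrative sketch).
shopt -s extglob   # needed for the +([0-9]) prefix-strip pattern

get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node files use the same format but prefix every line with "Node N ".
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # strip "Node N ", as at common.sh@29

    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # the long key-by-key walk seen here
        echo "${val:-0}"
        return 0
    done
    return 1
}

get_meminfo HugePages_Surp   # prints 0 on this rig, hence surp=0 below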
00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 40759320 kB' 'MemAvailable: 44305764 kB' 'Buffers: 2704 kB' 'Cached: 14373480 kB' 'SwapCached: 0 kB' 'Active: 11338772 kB' 'Inactive: 3510472 kB' 'Active(anon): 10902356 kB' 'Inactive(anon): 0 kB' 'Active(file): 436416 kB' 'Inactive(file): 3510472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 476176 kB' 'Mapped: 215084 kB' 'Shmem: 10429296 kB' 'KReclaimable: 186792 kB' 'Slab: 540728 kB' 'SReclaimable: 186792 kB' 'SUnreclaim: 353936 kB' 'KernelStack: 12768 kB' 'PageTables: 7784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086600 kB' 'Committed_AS: 12007020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196112 kB' 'VmallocChunk: 0 kB' 'Percpu: 31872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1646172 kB' 'DirectMap2M: 20293632 kB' 'DirectMap1G: 47185920 kB' 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.047 00:56:34 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.047 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.048 
00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 40759880 kB' 'MemAvailable: 44306324 kB' 'Buffers: 2704 kB' 'Cached: 14373480 kB' 'SwapCached: 0 kB' 'Active: 11338888 kB' 'Inactive: 3510472 kB' 
'Active(anon): 10902472 kB' 'Inactive(anon): 0 kB' 'Active(file): 436416 kB' 'Inactive(file): 3510472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 476292 kB' 'Mapped: 215084 kB' 'Shmem: 10429296 kB' 'KReclaimable: 186792 kB' 'Slab: 540756 kB' 'SReclaimable: 186792 kB' 'SUnreclaim: 353964 kB' 'KernelStack: 12752 kB' 'PageTables: 7764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086600 kB' 'Committed_AS: 12007040 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196112 kB' 'VmallocChunk: 0 kB' 'Percpu: 31872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1646172 kB' 'DirectMap2M: 20293632 kB' 'DirectMap1G: 47185920 kB' 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.048 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.049 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.049 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.049 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.049 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.049 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.049 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.049 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.049 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.049 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.049 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.049 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.049 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.049 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.049 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.049 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.049 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.049 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.049 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.049 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.049 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.049 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.049 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.049 
00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.049 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.049 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.049 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.049 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.049 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.049 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.049 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.049 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.049 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.049 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.049 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.049 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.049 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.049 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.049 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.049 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.049 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.049 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.049 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.049 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.049 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.049 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.049 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.049 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.049 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.049 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.049 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.049 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.049 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.049 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.049 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.049 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.049 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.049 00:56:34 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': '
00:03:19.049 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:19.049 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: the IFS=': ' read loop walks every remaining /proc/meminfo key, from SwapFree through HugePages_Free, and emits one "continue" for each key that is not HugePages_Rsvd]
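The block condensed above is the body of get_meminfo in setup/common.sh: it splits each "Key: value" line of /proc/meminfo on ': ' and continues past every key until the requested one matches, then echoes the value. A minimal Bash sketch of that lookup, reconstructed from the xtrace rather than copied from the SPDK source:

    # Reconstruction of the traced lookup; names follow the xtrace, not verbatim SPDK source.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do      # "MemTotal: 60541724 kB" -> var=MemTotal, val=60541724
            [[ $var == "$get" ]] || continue      # each skipped key is one "continue" in the trace
            echo "$val"                           # value only; the unit lands in $_
            return 0
        done < /proc/meminfo
        return 1
    }

Called as "get_meminfo HugePages_Rsvd" it prints 0 on this runner, the echo 0 / return 0 pair at the top of the next block.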
00:03:19.050 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:03:19.050 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:19.050 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:19.050 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:19.050 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:19.050 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:19.050 00:56:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:19.050 00:56:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:03:19.050 nr_hugepages=1536
00:03:19.050 00:56:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:19.050 resv_hugepages=0
00:03:19.050 00:56:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:19.050 surplus_hugepages=0
00:03:19.050 00:56:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:19.050 anon_hugepages=0
00:03:19.050 00:56:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:19.050 00:56:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
00:03:19.050 00:56:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:19.050 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:19.050 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:19.050 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:19.050 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:19.050 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:19.050 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:19.050 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:19.050 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:19.050 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:19.050 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:19.050 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:19.050 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 40759880 kB' 'MemAvailable: 44306324 kB' 'Buffers: 2704 kB' 'Cached: 14373484 kB' 'SwapCached: 0 kB' 'Active: 11338684 kB' 'Inactive: 3510472 kB' 'Active(anon): 10902268 kB' 'Inactive(anon): 0 kB' 'Active(file): 436416 kB' 'Inactive(file): 3510472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 476084 kB' 'Mapped: 215084 kB' 'Shmem: 10429300 kB' 'KReclaimable: 186792 kB' 'Slab: 540756 kB' 'SReclaimable: 186792 kB' 'SUnreclaim: 353964 kB' 'KernelStack: 12768 kB' 'PageTables: 7808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086600 kB' 'Committed_AS: 12007064 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196112 kB' 'VmallocChunk: 0 kB' 'Percpu: 31872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1646172 kB' 'DirectMap2M: 20293632 kB' 'DirectMap1G: 47185920 kB'
00:03:19.051 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: the read loop continues past every /proc/meminfo key above that is not HugePages_Total]
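At setup/hugepages.sh@107-110 the script asserts that the kernel's global pool is self-consistent: HugePages_Total must equal the requested nr_hugepages plus surplus and reserved pages. A sketch of that arithmetic with the values this run reported (1536 total, 0 surplus, 0 reserved), assuming the get_meminfo helper sketched earlier:

    # Consistency check mirroring the traced hugepages.sh logic.
    nr_hugepages=1536
    total=$(get_meminfo HugePages_Total)    # 1536 in this run
    surp=0                                  # surplus_hugepages printed above
    resv=0                                  # resv_hugepages printed above
    (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2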
00:03:19.051 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:19.051 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:03:19.051 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:19.051 00:56:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:19.051 00:56:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:19.051 00:56:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:03:19.051 00:56:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:19.051 00:56:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:19.051 00:56:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:19.051 00:56:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:19.051 00:56:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:19.051 00:56:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:19.051 00:56:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:19.051 00:56:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:19.051 00:56:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:19.051 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:19.051 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:03:19.051 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:19.051 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:19.051 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:19.051 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:19.051 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:19.051 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:19.051 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:19.051 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:19.051 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:19.051 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 26605628 kB' 'MemUsed: 6271312 kB' 'SwapCached: 0 kB' 'Active: 3969548 kB' 'Inactive: 200196 kB' 'Active(anon): 3796416 kB' 'Inactive(anon): 0 kB' 'Active(file): 173132 kB' 'Inactive(file): 200196 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4083212 kB' 'Mapped: 100840 kB' 'AnonPages: 89640 kB' 'Shmem: 3709884 kB' 'KernelStack: 6904 kB' 'PageTables: 3408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61556 kB' 'Slab: 259600 kB' 'SReclaimable: 61556 kB' 'SUnreclaim: 198044 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
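With a node argument, the trace shows mem_f switching from /proc/meminfo to /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix that is stripped (the extglob expansion "${mem[@]#Node +([0-9]) }") before the same key scan runs. A sketch of the per-node path, again a reconstruction from the trace:

    # Per-node lookup mirroring the node=0 trace; a reconstruction, not verbatim source.
    get_node_meminfo() {
        local get=$1 node=$2 line var val _
        while IFS= read -r line; do
            line=${line#"Node $node "}             # "Node 0 HugePages_Total: 512" -> "HugePages_Total: 512"
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < "/sys/devices/system/node/node${node}/meminfo"
        return 1
    }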
00:03:19.051 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: the read loop continues past every node0 meminfo key that is not HugePages_Surp]
00:03:19.052 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:19.052 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
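get_nodes found two NUMA nodes, and the custom allocation placed 512 pages on node0 and 1024 on node1 (the nodes_sys assignments above), summing to the global pool of 1536; node0's dump confirms HugePages_Total: 512 with zero surplus. For reference, a split like this is normally requested through the per-node sysfs knobs; the commands below are illustrative, not the exact ones this job ran:

    # Request a 512/1024 split of 2048 kB hugepages across two nodes (illustrative).
    echo 512  | sudo tee /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
    echo 1024 | sudo tee /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
    # Verify the split the same way the test reads it back:
    grep HugePages_Total /sys/devices/system/node/node{0,1}/meminfo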
00:03:19.052 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:19.052 00:56:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:19.052 00:56:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:19.052 00:56:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:19.052 00:56:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:19.052 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:19.052 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:03:19.052 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:19.052 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:19.052 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:19.052 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:19.052 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:19.052 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:19.052 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:19.052 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:19.052 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:19.052 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664784 kB' 'MemFree: 14154896 kB' 'MemUsed: 13509888 kB' 'SwapCached: 0 kB' 'Active: 7369312 kB' 'Inactive: 3310276 kB' 'Active(anon): 7106028 kB' 'Inactive(anon): 0 kB' 'Active(file): 263284 kB' 'Inactive(file): 3310276 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10292976 kB' 'Mapped: 114244 kB' 'AnonPages: 386616 kB' 'Shmem: 6719416 kB' 'KernelStack: 5864 kB' 'PageTables: 4400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 125236 kB' 'Slab: 281156 kB' 'SReclaimable: 125236 kB' 'SUnreclaim: 155920 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
setup/common.sh@32 -- # continue
00:03:19.052 00:56:34 setup.sh.hugepages.custom_alloc [xtrace collapsed: get_meminfo stepped through the remaining meminfo fields (SwapCached ... HugePages_Free), one "continue" per field that is not HugePages_Surp]
00:03:19.053 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:19.053 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:19.053 00:56:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:19.053 00:56:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
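The loop that follows turns those per-node counters into a pass/fail: it echoes "nodeN=<actual> expecting <expected>" for each node, then string-compares the joined actual values against the requested split. A hedged sketch of that comparison, using the names visible in the trace (the real hugepages.sh also tracks system-owned pages, omitted here):

    # nodes_test holds the measured per-node hugepage counts from get_meminfo
    nodes_test=([0]=512 [1]=1024)                 # measured counts, as echoed below
    expected=512,1024                             # the split custom_alloc asked for
    actual=$(IFS=,; echo "${nodes_test[*]}")      # join in index order -> "512,1024"
    [[ $actual == "$expected" ]] && echo "per-node split verified"

This matches the traced check at hugepages.sh@130, [[ 512,1024 == \5\1\2\,\1\0\2\4 ]].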
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:19.053 00:56:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:19.053 00:56:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:19.053 00:56:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:19.053 node0=512 expecting 512
00:03:19.053 00:56:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:19.053 00:56:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:19.053 00:56:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:19.053 00:56:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:03:19.053 node1=1024 expecting 1024
00:03:19.053 00:56:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:03:19.053
00:03:19.053 real	0m1.586s
00:03:19.053 user	0m0.690s
00:03:19.053 sys	0m0.860s
00:03:19.053 00:56:34 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:19.053 00:56:34 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:19.053 ************************************
00:03:19.053 END TEST custom_alloc
00:03:19.053 ************************************
00:03:19.053 00:56:35 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:19.053 00:56:35 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:03:19.053 00:56:35 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:19.053 00:56:35 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:19.053 00:56:35 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:19.053 ************************************
00:03:19.053 START TEST no_shrink_alloc
00:03:19.053 ************************************
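The first thing no_shrink_alloc does below is request 2 GiB of hugepages pinned to NUMA node 0 (get_test_nr_hugepages 2097152 0). A minimal sketch of that arithmetic, paraphrased from the xtrace rather than copied from SPDK's setup/hugepages.sh (names and structure are approximations):

    #!/usr/bin/env bash
    # Sketch of get_test_nr_hugepages / get_test_nr_hugepages_per_node as traced below.
    default_hugepages=2048                         # kB, Hugepagesize from /proc/meminfo

    get_test_nr_hugepages() {
        local size=$1                              # requested size in kB: 2097152 kB = 2 GiB
        shift
        local node_ids=("$@")                      # optional NUMA node list, here ('0')
        nr_hugepages=$((size / default_hugepages)) # 2097152 / 2048 = 1024 pages
        local -g nodes_test=()
        local node
        for node in "${node_ids[@]}"; do
            nodes_test[node]=$nr_hugepages         # pin the whole request: nodes_test[0]=1024
        done
    }

    get_test_nr_hugepages 2097152 0
    echo "node0 -> ${nodes_test[0]} pages"         # prints: node0 -> 1024 pages

With an explicit node list the entire request lands on the named node; the traced nr_hugepages=1024 and nodes_test[_no_nodes]=1024 lines below are exactly this path.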
00:03:19.053 00:56:35 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc
00:03:19.053 00:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:03:19.053 00:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:19.053 00:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:19.053 00:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:03:19.053 00:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:19.053 00:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:19.053 00:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:19.053 00:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:19.053 00:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:19.053 00:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:19.053 00:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:19.053 00:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:19.053 00:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:19.053 00:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:19.053 00:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:19.053 00:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:19.053 00:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:19.053 00:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:19.053 00:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:19.053 00:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:03:19.053 00:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:19.053 00:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:20.430 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:20.430 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:20.430 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:20.430 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:20.430 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:20.430 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:20.430 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:20.430 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:20.430 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:20.430 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:20.430 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:20.430 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:20.430 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:20.430 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:20.430 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:20.430 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:20.430 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:20.430 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:03:20.430 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:20.430 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:20.430 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:20.430 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:20.430 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:20.430 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:20.430 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
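Everything from here to the end of the section is one pattern repeated: verify_nr_hugepages calls get_meminfo for AnonHugePages, HugePages_Surp, and HugePages_Rsvd, and each call walks the meminfo snapshot field by field, emitting one traced "continue" per non-matching field. A condensed sketch of that helper, simplified from what the xtrace shows (the real setup/common.sh additionally handles the per-node /sys/devices/system/node/node<N>/meminfo files, which is what the mapfile and mem=("${mem[@]#Node +([0-9]) }") lines are for):

    #!/usr/bin/env bash
    # Simplified get_meminfo: print the value of one /proc/meminfo field.
    get_meminfo() {
        local get=$1 var val rest
        while IFS=': ' read -r var val rest; do
            if [[ $var == "$get" ]]; then
                echo "$val"                  # e.g. 0 for HugePages_Surp in this run
                return 0
            fi
        done < /proc/meminfo
        return 1
    }

    get_meminfo AnonHugePages                # this box reports 0 kB of THP usage

Setting IFS=': ' makes read split on the colon and padding spaces, so "HugePages_Total:    1024" yields var=HugePages_Total and val=1024, exactly the var/val pairs seen in the trace.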
00:03:20.430 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:20.430 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:20.430 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:20.430 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:20.430 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:20.430 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:20.431 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:20.431 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:20.431 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:20.431 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:20.431 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:20.431 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:20.431 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 41779556 kB' 'MemAvailable: 45326000 kB' 'Buffers: 2704 kB' 'Cached: 14373604 kB' 'SwapCached: 0 kB' 'Active: 11340336 kB' 'Inactive: 3510472 kB' 'Active(anon): 10903920 kB' 'Inactive(anon): 0 kB' 'Active(file): 436416 kB' 'Inactive(file): 3510472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 477736 kB' 'Mapped: 215192 kB' 'Shmem: 10429420 kB' 'KReclaimable: 186792 kB' 'Slab: 540636 kB' 'SReclaimable: 186792 kB' 'SUnreclaim: 353844 kB' 'KernelStack: 12880 kB' 'PageTables: 8184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 12009972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196240 kB' 'VmallocChunk: 0 kB' 'Percpu: 31872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1646172 kB' 'DirectMap2M: 20293632 kB' 'DirectMap1G: 47185920 kB'
00:03:20.431 00:56:36 setup.sh.hugepages.no_shrink_alloc [xtrace collapsed: the loop tested every field from MemTotal onward against AnonHugePages, one "continue" per non-match]
00:03:20.432 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:20.432 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:20.432 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:20.432 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:20.432 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:20.432 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:20.432 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:20.432 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:20.432 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:20.432 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:20.432 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:20.432 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:20.432 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:20.432 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:20.432 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:20.432 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:20.432 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 41779468 kB' 'MemAvailable: 45325912 kB' 'Buffers: 2704 kB' 'Cached: 14373604 kB' 'SwapCached: 0 kB' 'Active: 11340664 kB' 'Inactive: 3510472 kB' 'Active(anon): 10904248 kB' 'Inactive(anon): 0 kB' 'Active(file): 436416 kB' 'Inactive(file): 3510472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 477608 kB' 'Mapped: 215192 kB' 'Shmem: 10429420 kB' 'KReclaimable: 186792 kB' 'Slab: 540632 kB' 'SReclaimable: 186792 kB' 'SUnreclaim: 353840 kB' 'KernelStack: 13264 kB' 'PageTables: 8548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 12009992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196240 kB' 'VmallocChunk: 0 kB' 'Percpu: 31872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1646172 kB' 'DirectMap2M: 20293632 kB' 'DirectMap1G: 47185920 kB'
00:03:20.433 00:56:36 setup.sh.hugepages.no_shrink_alloc [xtrace collapsed: same field-by-field scan, this time for HugePages_Surp]
00:03:20.434 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:20.434 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:20.434 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:20.434 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
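At this point the verifier has anon=0 and surp=0; the HugePages_Rsvd read below completes the trio. All three counters can be spot-checked by hand on any Linux box with a single command:

    grep -E '^(AnonHugePages|HugePages_(Total|Free|Rsvd|Surp))' /proc/meminfo

With 0 kB of AnonHugePages and zero surplus/reserved pages, the 1024 pre-allocated hugepages shown in the snapshots above are all genuinely free for the test to claim.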
00:03:20.434 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:20.434 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:20.434 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:20.434 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:20.434 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:20.434 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:20.434 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:20.434 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:20.434 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:20.434 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:20.434 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:20.434 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:20.434 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 41779124 kB' 'MemAvailable: 45325568 kB' 'Buffers: 2704 kB' 'Cached: 14373608 kB' 'SwapCached: 0 kB' 'Active: 11339584 kB' 'Inactive: 3510472 kB' 'Active(anon): 10903168 kB' 'Inactive(anon): 0 kB' 'Active(file): 436416 kB' 'Inactive(file): 3510472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 476932 kB' 'Mapped: 215200 kB' 'Shmem: 10429424 kB' 'KReclaimable: 186792 kB' 'Slab: 540632 kB' 'SReclaimable: 186792 kB' 'SUnreclaim: 353840 kB' 'KernelStack: 12976 kB' 'PageTables: 8204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 12008664 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196224 kB' 'VmallocChunk: 0 kB' 'Percpu: 31872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1646172 kB' 'DirectMap2M: 20293632 kB' 'DirectMap1G: 47185920 kB'
00:03:20.435 00:56:36 setup.sh.hugepages.no_shrink_alloc [xtrace collapsed: field-by-field scan for HugePages_Rsvd in progress; the trace continues below]
00:03:20.435 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:20.435
00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.435 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.435 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.435 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.435 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.435 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.435 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.435 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.435 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.435 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.435 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.435 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.435 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.435 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.435 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.435 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.435 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.435 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.435 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.435 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.435 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.435 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.435 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.435 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.436 00:56:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.436 00:56:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:20.436 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:20.437 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:20.437 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:20.437 nr_hugepages=1024 00:03:20.437 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:20.437 resv_hugepages=0 00:03:20.437 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:20.437 surplus_hugepages=0 00:03:20.437 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:20.437 anon_hugepages=0 00:03:20.437 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:20.437 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:20.437 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:20.437 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:20.437 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:20.437 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:20.437 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:20.437 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.437 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.437 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.437 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.437 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.437 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.437 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.437 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 41775904 kB' 'MemAvailable: 45322348 kB' 'Buffers: 2704 kB' 'Cached: 14373608 kB' 'SwapCached: 0 kB' 'Active: 11340680 kB' 'Inactive: 3510472 kB' 'Active(anon): 10904264 kB' 'Inactive(anon): 0 kB' 'Active(file): 436416 kB' 'Inactive(file): 3510472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 477944 kB' 'Mapped: 215124 kB' 'Shmem: 10429424 kB' 'KReclaimable: 186792 kB' 'Slab: 540620 kB' 'SReclaimable: 186792 kB' 'SUnreclaim: 353828 kB' 'KernelStack: 13184 kB' 'PageTables: 8852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 
'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 12010036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196336 kB' 'VmallocChunk: 0 kB' 'Percpu: 31872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1646172 kB' 'DirectMap2M: 20293632 kB' 'DirectMap1G: 47185920 kB' 00:03:20.437 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.437 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.437 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.437 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.437 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.437 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.437 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.437 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.437 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.437 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.437 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.437 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.437 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.437 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.437 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.437 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.437 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.437 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.437 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.437 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.437 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.437 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.437 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.437 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.437 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.437 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.437 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.437 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.697 00:56:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.697 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.697 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.697 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.697 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.697 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.697 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.697 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.697 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.697 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.697 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.697 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.697 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.697 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.697 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.697 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.697 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.697 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.697 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.697 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.697 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.697 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.697 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.697 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.697 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.697 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.697 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.697 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.697 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.697 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.697 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.697 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.697 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.697 00:56:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.697 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.697 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.697 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.697 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.697 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.697 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.697 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.697 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.697 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.697 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.697 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.697 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.697 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.697 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.697 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.697 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.697 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.697 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.697 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.697 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.697 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.697 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.697 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.697 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.697 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.697 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.697 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.697 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.698 00:56:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.698 
00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:20.698 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:20.699 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.699 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.699 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.699 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.699 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 25551836 kB' 'MemUsed: 7325104 kB' 'SwapCached: 0 kB' 'Active: 3969476 kB' 'Inactive: 200196 kB' 'Active(anon): 3796344 kB' 'Inactive(anon): 0 kB' 'Active(file): 173132 kB' 'Inactive(file): 200196 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4083364 kB' 'Mapped: 100864 kB' 'AnonPages: 89312 kB' 'Shmem: 3710036 kB' 'KernelStack: 7000 kB' 'PageTables: 3592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61556 kB' 'Slab: 259536 kB' 'SReclaimable: 61556 kB' 'SUnreclaim: 197980 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:20.699 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:20.699 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.699 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.699 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.699 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.699 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.699 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.699 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.699 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.699 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.699 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.699 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.699 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.699 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.699 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.699 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.699 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.699 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.699 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.699 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.699 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.699 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.699 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.699 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.699 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.699 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.699 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.699 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.699 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.699 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.699 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.699 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.699 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.699 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.699 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:20.699 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:20.699 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:20.699 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[xtrace condensed: the common.sh@31 IFS=': ' / read -r var val _ and @32 compare / continue cycle repeats identically for each remaining meminfo key (Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total, HugePages_Free), none of which matches HugePages_Surp]
00:03:20.700 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:20.700 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:20.700 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:20.700 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:20.700 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:20.700 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:20.700 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:20.700 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
node0=1024 expecting 1024
00:03:20.700 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:20.700 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:03:20.700 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:03:20.700 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:03:20.700 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:20.700 00:56:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:21.636 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:21.636 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:21.636 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:21.636 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:21.636 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:21.636 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:21.636 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:21.636 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:21.636 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:21.636 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:21.636 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:21.636 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:21.636 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:21.636 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:21.636 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:21.637 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:21.637 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:21.901 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:03:21.901 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:03:21.901 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:21.901 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:21.901 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:21.901 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:21.901 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:21.901 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:21.901 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
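For reference, a minimal standalone sketch of the get_meminfo helper whose xtrace follows (reconstructed from the trace with simplified details; not the verbatim setup/common.sh source): it reads /proc/meminfo, or a per-node meminfo file when a node argument is given, strips any "Node N" prefix, then scans key/value pairs with IFS=': ' until the requested key matches.

#!/usr/bin/env bash
shopt -s extglob   # needed for the +([0-9]) pattern that strips "Node N" prefixes

get_meminfo() {    # sketch: get_meminfo <key> [<node>]
    local get=$1 node=${2:-}
    local var val _
    local mem_f=/proc/meminfo mem
    # per-node statistics live under /sys/devices/system/node/node<N>/meminfo
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix on per-node files
    while IFS=': ' read -r var val _; do
        # each comparison here is one "[[ <key> == ... ]] / continue" pair in the trace
        [[ $var == "$get" ]] || continue
        echo "${val:-0}"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    echo 0
}

# On the machine traced here, these would print 0 and 1024 respectively:
get_meminfo HugePages_Surp
get_meminfo HugePages_Total 0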
00:03:21.901 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:21.901 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:21.901 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:21.901 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:21.901 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:21.901 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:21.901 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:21.901 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:21.901 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:21.901 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:21.901 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:21.901 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:21.901 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 41772056 kB' 'MemAvailable: 45318500 kB' 'Buffers: 2704 kB' 'Cached: 14373720 kB' 'SwapCached: 0 kB' 'Active: 11339540 kB' 'Inactive: 3510472 kB' 'Active(anon): 10903124 kB' 'Inactive(anon): 0 kB' 'Active(file): 436416 kB' 'Inactive(file): 3510472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 476796 kB' 'Mapped: 215128 kB' 'Shmem: 10429536 kB' 'KReclaimable: 186792 kB' 'Slab: 540528 kB' 'SReclaimable: 186792 kB' 'SUnreclaim: 353736 kB' 'KernelStack: 12784 kB' 'PageTables: 7708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 12008020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196064 kB' 'VmallocChunk: 0 kB' 'Percpu: 31872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1646172 kB' 'DirectMap2M: 20293632 kB' 'DirectMap1G: 47185920 kB'
[xtrace condensed: every key preceding AnonHugePages in the snapshot above (MemTotal through HardwareCorrupted) is compared against \A\n\o\n\H\u\g\e\P\a\g\e\s at common.sh@32 and skipped with continue]
00:03:21.902 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:21.902 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:21.902 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:21.902 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
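A note on the heavily escaped patterns in the trace: inside [[ $var == "$get" ]] a quoted right-hand side is matched literally rather than as a glob, and bash xtrace prints such a literal pattern with every character backslash-escaped. That is why the log shows \A\n\o\n\H\u\g\e\P\a\g\e\s instead of AnonHugePages. A two-line demonstration (any recent bash):

set -x
get=AnonHugePages var=AnonHugePages
[[ $var == "$get" ]] && echo match
# the xtrace line comes out as: [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]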
00:03:21.902 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:21.902 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:21.902 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:21.902 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:21.902 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:21.902 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:21.902 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:21.902 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:21.902 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:21.902 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:21.902 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:21.902 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:21.902 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 41774508 kB' 'MemAvailable: 45320952 kB' 'Buffers: 2704 kB' 'Cached: 14373724 kB' 'SwapCached: 0 kB' 'Active: 11339456 kB' 'Inactive: 3510472 kB' 'Active(anon): 10903040 kB' 'Inactive(anon): 0 kB' 'Active(file): 436416 kB' 'Inactive(file): 3510472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 476796 kB' 'Mapped: 215184 kB' 'Shmem: 10429540 kB' 'KReclaimable: 186792 kB' 'Slab: 540560 kB' 'SReclaimable: 186792 kB' 'SUnreclaim: 353768 kB' 'KernelStack: 12832 kB' 'PageTables: 7856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 12008036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196048 kB' 'VmallocChunk: 0 kB' 'Percpu: 31872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1646172 kB' 'DirectMap2M: 20293632 kB' 'DirectMap1G: 47185920 kB'
[xtrace condensed: every key preceding HugePages_Surp in the snapshot above (MemTotal through HugePages_Rsvd) is compared against \H\u\g\e\P\a\g\e\s\_\S\u\r\p at common.sh@32 and skipped with continue]
00:03:21.904 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:21.904 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:21.904 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:21.904 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
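With anon=0 and surp=0 established, verify_nr_hugepages queries HugePages_Rsvd next. For context, a sketch of the per-node accounting traced earlier at hugepages.sh@117 and @126-@130 (variable roles are assumptions read off the trace lines, not the verbatim script):

# assumed: nodes_test holds expected pages per node, nodes_sys the observed count
nodes_test=([0]=1024)
nodes_sys=([0]=1024)
sorted_t=() sorted_s=()
surp=0   # get_meminfo HugePages_Surp returned 0 above
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += surp ))   # fold surplus pages into the expected count
    sorted_t[nodes_test[node]]=1     # collect the distinct expected totals
    sorted_s[nodes_sys[node]]=1      # collect the distinct observed totals
    echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
done
# the trace then asserts agreement: [[ 1024 == 1024 ]]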
00:03:21.904 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:21.904 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:21.904 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:21.904 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:21.904 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:21.904 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:21.904 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:21.904 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:21.904 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:21.904 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:21.904 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:21.904 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:21.904 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 41773840 kB' 'MemAvailable: 45320284 kB' 'Buffers: 2704 kB' 'Cached: 14373744 kB' 'SwapCached: 0 kB' 'Active: 11339416 kB' 'Inactive: 3510472 kB' 'Active(anon): 10903000 kB' 'Inactive(anon): 0 kB' 'Active(file): 436416 kB' 'Inactive(file): 3510472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 476632 kB' 'Mapped: 215108 kB' 'Shmem: 10429560 kB' 'KReclaimable: 186792 kB' 'Slab: 540556 kB' 'SReclaimable: 186792 kB' 'SUnreclaim: 353764 kB' 'KernelStack: 12832 kB' 'PageTables: 7852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 12008060 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196064 kB' 'VmallocChunk: 0 kB' 'Percpu: 31872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1646172 kB' 'DirectMap2M: 20293632 kB' 'DirectMap1G: 47185920 kB'
[xtrace condensed: the per-key compare / continue cycle against \H\u\g\e\P\a\g\e\s\_\R\s\v\d begins (MemTotal through AnonPages skipped so far); the captured log breaks off mid-scan at this point]
# IFS=': ' 00:03:21.905 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.905 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.905 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.905 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.905 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.905 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.905 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.905 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.905 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.905 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.905 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.905 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.905 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.905 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.905 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.905 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.905 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.905 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.905 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.905 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.905 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.905 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.905 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.905 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.905 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.905 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.905 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.905 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.905 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.905 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.905 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.905 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.905 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.905 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.905 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.905 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.905 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.905 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.905 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.905 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.905 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.905 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.905 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.905 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.905 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.905 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.905 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.905 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.905 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.905 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.906 00:56:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:21.906 nr_hugepages=1024 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:21.906 resv_hugepages=0 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:21.906 surplus_hugepages=0 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:21.906 anon_hugepages=0 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
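What this trace keeps exercising is the get_meminfo helper from setup/common.sh: it snapshots /proc/meminfo (or a node's sysfs meminfo when a node argument is given) and scans it key by key until the requested key matches. A minimal bash sketch reconstructed from the xtrace above; the real helper may differ in detail, and the extglob prefix-stripping is rendered exactly as traced:

  shopt -s extglob    # needed for the +([0-9]) pattern below

  get_meminfo() {     # usage: get_meminfo <key> [<node>]
    local get=$1 node=$2 var val
    local mem_f mem
    mem_f=/proc/meminfo
    # with a node argument, read the per-node view from sysfs instead;
    # with no argument the node path does not exist and /proc/meminfo is kept,
    # which is why the trace shows "node/node/meminfo" failing the -e test
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
      mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # per-node meminfo lines carry a "Node N " prefix; strip it
    mem=("${mem[@]#Node +([0-9]) }")
    while IFS=': ' read -r var val _; do
      [[ $var == "$get" ]] || continue   # the long run of continues in the trace
      echo "$val"
      return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
  }

So get_meminfo HugePages_Rsvd walks every key until HugePages_Rsvd matches and prints 0, which is exactly the echo 0 / return 0 pair seen above.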
00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:21.906 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 41774452 kB' 'MemAvailable: 45320896 kB' 'Buffers: 2704 kB' 'Cached: 14373764 kB' 'SwapCached: 0 kB' 'Active: 11339392 kB' 'Inactive: 3510472 kB' 'Active(anon): 10902976 kB' 'Inactive(anon): 0 kB' 'Active(file): 436416 kB' 'Inactive(file): 3510472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 476632 kB' 'Mapped: 215108 kB' 'Shmem: 10429580 kB' 'KReclaimable: 186792 kB' 'Slab: 540556 kB' 'SReclaimable: 186792 kB' 'SUnreclaim: 353764 kB' 'KernelStack: 12832 kB' 'PageTables: 7852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 12008080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196064 kB' 'VmallocChunk: 0 kB' 'Percpu: 31872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1646172 kB' 'DirectMap2M: 20293632 kB' 'DirectMap1G: 47185920 kB'
[xtrace of the key-matching loop elided: the same scan over every meminfo key, this time continuing until HugePages_Total matches]
00:03:21.908 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:03:21.908 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:21.908 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:21.908 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:21.908 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:03:21.908 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:21.908 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:21.908 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:21.908 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:21.908 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:21.908 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
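With the get_meminfo helper above, the accounting hugepages.sh performs around @107-@117 reads naturally: the kernel's global pool must equal the requested pages plus surplus plus reserved, and each NUMA node's share is then tallied the same way. A sketch of the idea; the variable name requested is illustrative, not taken from the script:

  requested=1024                          # target the test configured earlier
  total=$(get_meminfo HugePages_Total)    # 1024 in this run
  resv=$(get_meminfo HugePages_Rsvd)      # 0 in this run
  surp=$(get_meminfo HugePages_Surp)      # 0 in this run
  (( total == requested + surp + resv )) || echo 'hugepage accounting mismatch' >&2

  # per-node view, as traced next: node0 holds the entire 1024-page pool
  get_meminfo HugePages_Total 0           # -> 1024
  get_meminfo HugePages_Surp 0            # -> 0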
00:03:21.908 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:21.908 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:21.908 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:21.908 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:21.908 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:03:21.908 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:21.908 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:21.908 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:21.908 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:21.908 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:21.908 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:21.908 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:21.908 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:21.908 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:21.908 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 25539544 kB' 'MemUsed: 7337396 kB' 'SwapCached: 0 kB' 'Active: 3969316 kB' 'Inactive: 200196 kB' 'Active(anon): 3796184 kB' 'Inactive(anon): 0 kB' 'Active(file): 173132 kB' 'Inactive(file): 200196 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4083428 kB' 'Mapped: 100840 kB' 'AnonPages: 89280 kB' 'Shmem: 3710100 kB' 'KernelStack: 6968 kB' 'PageTables: 3400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61556 kB' 'Slab: 259488 kB' 'SReclaimable: 61556 kB' 'SUnreclaim: 197932 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace of the key-matching loop elided: the same scan once more, over node0's shorter sysfs meminfo, continuing until HugePages_Surp matches]
00:03:21.909 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:21.909 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:21.909 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:21.909 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:21.909 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:21.909 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:21.909 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:21.909 node0=1024 expecting 1024
00:03:21.909 00:56:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:21.909
00:03:21.909 real    0m2.851s
00:03:21.909 user    0m1.169s
00:03:21.909 sys     0m1.598s
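After the timing summary, hugepages.sh tears the pools down again via clear_hp, traced just below: for every node and every hugepage pool size it writes 0 back. A sketch, assuming the echo 0 lands in each pool's nr_hugepages file, since the redirection target is not visible in the xtrace; nodes_sys is the per-node map built by get_nodes earlier:

  clear_hp() {
    local node hp
    for node in "${!nodes_sys[@]}"; do
      for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*; do
        echo 0 > "$hp/nr_hugepages"   # assumed target: release this pool on this node
      done
    done
    export CLEAR_HUGE=yes             # signal later stages that the pools were cleared
  }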
setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:21.909 00:56:37 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:21.909 ************************************ 00:03:21.909 END TEST no_shrink_alloc 00:03:21.909 ************************************ 00:03:22.168 00:56:37 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:22.168 00:56:37 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:22.168 00:56:37 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:22.168 00:56:37 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:22.168 00:56:37 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:22.168 00:56:37 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:22.168 00:56:37 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:22.168 00:56:37 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:22.168 00:56:37 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:22.168 00:56:37 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:22.168 00:56:37 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:22.168 00:56:37 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:22.168 00:56:37 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:22.168 00:56:37 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:22.168 00:56:37 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:22.168 00:03:22.168 real 0m11.991s 00:03:22.168 user 0m4.595s 00:03:22.168 sys 0m6.292s 00:03:22.168 00:56:37 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:22.168 00:56:37 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:22.168 ************************************ 00:03:22.168 END TEST hugepages 00:03:22.168 ************************************ 00:03:22.168 00:56:37 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:22.168 00:56:37 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:22.168 00:56:37 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:22.168 00:56:37 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:22.168 00:56:37 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:22.168 ************************************ 00:03:22.168 START TEST driver 00:03:22.168 ************************************ 00:03:22.168 00:56:37 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:22.168 * Looking for test storage... 
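For reference, the clear_hp teardown traced just above resets every hugepage pool by writing 0 into each per-node nr_hugepages file before exporting CLEAR_HUGE=yes. A sketch of the same loop over the standard sysfs layout (run as root; this mirrors the setup/hugepages.sh@39-45 trace but is not a verbatim copy):

  # Zero out hugepage reservations on every NUMA node and page size.
  for node in /sys/devices/system/node/node*; do
      for hp in "$node"/hugepages/hugepages-*; do
          echo 0 > "$hp/nr_hugepages"
      done
  done
  export CLEAR_HUGE=yes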
00:03:22.168 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:22.168 00:56:38 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:22.168 00:56:38 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:22.168 00:56:38 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:24.705 00:56:40 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:24.705 00:56:40 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:24.705 00:56:40 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:24.705 00:56:40 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:24.705 ************************************ 00:03:24.705 START TEST guess_driver 00:03:24.705 ************************************ 00:03:24.705 00:56:40 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:03:24.705 00:56:40 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:24.705 00:56:40 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:24.705 00:56:40 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:24.705 00:56:40 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:24.705 00:56:40 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_groups 00:03:24.705 00:56:40 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:24.705 00:56:40 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:24.705 00:56:40 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:24.705 00:56:40 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:24.705 00:56:40 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:03:24.705 00:56:40 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:24.705 00:56:40 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:24.705 00:56:40 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:24.705 00:56:40 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:24.705 00:56:40 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:24.705 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:24.705 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:24.705 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:24.705 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:24.705 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:24.705 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:24.705 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:24.705 00:56:40 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:24.705 00:56:40 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:24.705 00:56:40 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:24.705
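pick_driver settles on vfio-pci here because all three probes in the trace succeed: the unsafe no-IOMMU module parameter exists (and reads N), 141 IOMMU groups are present, and modprobe can resolve vfio_pci to real .ko modules. A condensed sketch of that decision, assuming the same sysfs paths (the function name is illustrative):

  # Return success when vfio-pci is usable, mirroring setup/driver.sh's vfio().
  vfio_usable() {
      local unsafe=N groups=(/sys/kernel/iommu_groups/*)
      if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
          unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
      fi
      # Need a working IOMMU (groups exist) or unsafe no-IOMMU mode switched on.
      # Note: without nullglob, an empty iommu_groups dir yields one literal entry.
      (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]] || return 1
      # vfio_pci must resolve to loadable modules (the insmod ... .ko lines above).
      modprobe --show-depends vfio_pci | grep -q '\.ko'
  }
  vfio_usable && echo vfio-pci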
00:56:40 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:24.705 00:56:40 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:24.705 Looking for driver=vfio-pci 00:03:24.705 00:56:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:24.705 00:56:40 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:24.705 00:56:40 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:24.705 00:56:40 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:26.078 00:56:41 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] / setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] / setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver -- repeated once per device line emitted by setup.sh config (00:03:26.078 through 00:03:27.213, 00:56:41-00:56:42) 00:03:27.213 00:56:43 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:27.213 00:56:43 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:27.213 00:56:43 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:27.213 00:56:43 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:29.740 00:03:29.740 real 0m5.033s 00:03:29.740 user 0m1.077s 00:03:29.740 sys 0m1.943s 00:56:45 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:29.740 00:56:45 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:29.740 ************************************ 00:03:29.740 END TEST guess_driver 00:03:29.740 ************************************ 00:03:29.740 00:56:45 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:03:29.740 00:03:29.740 real 0m7.726s 00:03:29.740 user 0m1.659s 00:03:29.740 sys 0m3.007s 00:56:45
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:29.740 00:56:45 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:29.740 ************************************ 00:03:29.740 END TEST driver 00:03:29.740 ************************************ 00:03:29.740 00:56:45 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:29.740 00:56:45 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:29.740 00:56:45 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:29.740 00:56:45 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:29.740 00:56:45 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:29.997 ************************************ 00:03:29.997 START TEST devices 00:03:29.997 ************************************ 00:03:29.997 00:56:45 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:29.997 * Looking for test storage... 00:03:29.997 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:29.997 00:56:45 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:29.997 00:56:45 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:29.997 00:56:45 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:29.997 00:56:45 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:31.371 00:56:47 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:31.371 00:56:47 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:31.371 00:56:47 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:31.371 00:56:47 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:31.371 00:56:47 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:31.371 00:56:47 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:31.371 00:56:47 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:31.371 00:56:47 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:31.371 00:56:47 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:31.371 00:56:47 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:31.371 00:56:47 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:31.371 00:56:47 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:31.371 00:56:47 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:31.371 00:56:47 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:31.371 00:56:47 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:31.371 00:56:47 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:31.371 00:56:47 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:31.371 00:56:47 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:0b:00.0 00:03:31.371 00:56:47 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\b\:\0\0\.\0* ]] 00:03:31.371 00:56:47 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:31.371 00:56:47 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:31.371 
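The devices suite opens with get_zoned_devs, traced above, which walks /sys/block/nvme* and records any namespace whose queue/zoned attribute is not none, since the mount tests below only work on conventional block devices. A sketch of that classification under the standard sysfs layout:

  # Collect zoned NVMe namespaces so the mount tests can avoid them.
  declare -A zoned_devs=()
  for nvme in /sys/block/nvme*; do
      [[ -e $nvme/queue/zoned ]] || continue
      [[ $(< "$nvme/queue/zoned") == none ]] && continue
      zoned_devs[${nvme##*/}]=1   # e.g. zoned_devs[nvme0n1]=1
  done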
00:56:47 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:31.371 No valid GPT data, bailing 00:03:31.371 00:56:47 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:31.371 00:56:47 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:31.371 00:56:47 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:31.371 00:56:47 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:31.371 00:56:47 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:31.371 00:56:47 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:31.371 00:56:47 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:03:31.371 00:56:47 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:03:31.371 00:56:47 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:31.371 00:56:47 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:0b:00.0 00:03:31.371 00:56:47 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:31.371 00:56:47 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:31.371 00:56:47 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:31.371 00:56:47 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:31.371 00:56:47 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:31.371 00:56:47 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:31.371 ************************************ 00:03:31.371 START TEST nvme_mount 00:03:31.371 ************************************ 00:03:31.371 00:56:47 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:03:31.371 00:56:47 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:31.371 00:56:47 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:31.371 00:56:47 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:31.372 00:56:47 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:31.372 00:56:47 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:31.372 00:56:47 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:31.372 00:56:47 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:31.372 00:56:47 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:31.372 00:56:47 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:31.372 00:56:47 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:31.372 00:56:47 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:31.372 00:56:47 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:31.372 00:56:47 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:31.372 00:56:47 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:31.372 00:56:47 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:31.372 00:56:47 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
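The probe above finds no partition table on nvme0n1 ("No valid GPT data, bailing", empty PTTYPE from blkid), so the disk is free to use, and the size check confirms it is big enough: sysfs reports a 512-byte sector count that multiplies out to the 1000204886016 bytes echoed, comfortably over min_disk_size=3221225472 (3 GiB). The same check in isolation:

  # Byte size of a block device from its 512-byte sector count in sysfs.
  dev=nvme0n1
  bytes=$(( $(< /sys/block/$dev/size) * 512 ))
  (( bytes >= 3221225472 )) && echo "$dev qualifies as the test disk"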
# (( part <= part_no )) 00:03:31.372 00:56:47 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:31.372 00:56:47 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:31.372 00:56:47 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:32.753 Creating new GPT entries in memory. 00:03:32.753 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:32.753 other utilities. 00:03:32.753 00:56:48 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:32.753 00:56:48 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:32.753 00:56:48 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:32.753 00:56:48 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:32.753 00:56:48 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:33.723 Creating new GPT entries in memory. 00:03:33.723 The operation has completed successfully. 00:03:33.723 00:56:49 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:33.723 00:56:49 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:33.723 00:56:49 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 4016146 00:03:33.723 00:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:33.723 00:56:49 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:33.723 00:56:49 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:33.723 00:56:49 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:33.723 00:56:49 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:33.723 00:56:49 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:33.723 00:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:0b:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:33.723 00:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:03:33.723 00:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:33.723 00:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:33.724 00:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:33.724 00:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:33.724 00:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:33.724 00:56:49 
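partition_drive, traced above, wipes the label with sgdisk --zap-all and then creates each 1 GiB partition under flock so nothing else touches the disk while the table is rewritten; size /= 512 converts the 1073741824-byte target into 2097152 sectors, which is exactly why the trace shows --new=1:2048:2099199. The arithmetic and calls in isolation (device name as in this run):

  # Lay down one 1 GiB partition starting at sector 2048, as the trace shows.
  disk=/dev/nvme0n1
  size=$(( 1073741824 / 512 ))              # 2097152 sectors
  part_start=2048
  part_end=$(( part_start + size - 1 ))     # 2099199
  sgdisk "$disk" --zap-all
  flock "$disk" sgdisk "$disk" --new=1:$part_start:$part_end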
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:33.724 00:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:33.724 00:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.724 00:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:03:33.724 00:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:33.724 00:56:49 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:33.724 00:56:49 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:34.661 00:56:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:34.661 00:56:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.661 00:56:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:34.661 00:56:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.661 00:56:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:34.661 00:56:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.661 00:56:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:34.661 00:56:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.661 00:56:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:34.661 00:56:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.661 00:56:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:34.661 00:56:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.661 00:56:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:34.661 00:56:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.661 00:56:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:34.661 00:56:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.661 00:56:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:34.661 00:56:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:34.661 00:56:50 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:34.661 00:56:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.661 00:56:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:34.661 00:56:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.661 00:56:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:34.661 00:56:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.661 00:56:50 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:34.661 00:56:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.661 00:56:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:34.661 00:56:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.661 00:56:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:34.661 00:56:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.661 00:56:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:34.661 00:56:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.661 00:56:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:34.661 00:56:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.661 00:56:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:34.661 00:56:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.919 00:56:50 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:34.919 00:56:50 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:34.919 00:56:50 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:34.919 00:56:50 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:34.919 00:56:50 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:34.919 00:56:50 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:34.919 00:56:50 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:34.919 00:56:50 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:34.919 00:56:50 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:34.919 00:56:50 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:34.919 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:34.919 00:56:50 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:34.919 00:56:50 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:35.176 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:35.176 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:35.177 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:35.177 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:35.177 00:56:51 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:35.177 00:56:51 setup.sh.devices.nvme_mount -- 
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:35.177 00:56:51 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:35.177 00:56:51 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:35.177 00:56:51 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:35.177 00:56:51 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:35.177 00:56:51 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:0b:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:35.177 00:56:51 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:03:35.177 00:56:51 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:35.177 00:56:51 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:35.177 00:56:51 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:35.177 00:56:51 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:35.177 00:56:51 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:35.177 00:56:51 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:35.177 00:56:51 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:35.177 00:56:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.177 00:56:51 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:03:35.177 00:56:51 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:35.177 00:56:51 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:35.177 00:56:51 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:36.596 00:56:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:36.596 00:56:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.596 00:56:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:36.596 00:56:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.596 00:56:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:36.596 00:56:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.596 00:56:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:36.596 00:56:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.596 00:56:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:36.596 00:56:52 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.596 00:56:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:36.596 00:56:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.596 00:56:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:36.596 00:56:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.596 00:56:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:36.596 00:56:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.596 00:56:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:36.596 00:56:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:36.596 00:56:52 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:36.596 00:56:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.596 00:56:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:36.596 00:56:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.596 00:56:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:36.596 00:56:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.596 00:56:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:36.596 00:56:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.596 00:56:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:36.596 00:56:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.596 00:56:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:36.596 00:56:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.596 00:56:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:36.596 00:56:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.596 00:56:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:36.596 00:56:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.596 00:56:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:36.596 00:56:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.596 00:56:52 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:36.596 00:56:52 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:36.596 00:56:52 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:36.596 00:56:52 
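The long read loops through setup/devices.sh@60-63 above are verify() scanning the output of scripts/setup.sh config line by line: each PCI address is compared against the allowed controller, and found is only set once the status column reports the device as active through the expected mount, i.e. setup.sh refused to rebind it while nvme0n1 is in use. A sketch of that scan, assuming setup.sh config prints one "pci ... status" line per device as it does in this run (the relative script path is illustrative):

  # Check that the allowed controller stays bound because its disk is mounted.
  target=0000:0b:00.0 mounts=nvme0n1:nvme0n1 found=0
  while read -r pci _ _ status; do
      [[ $pci == "$target" ]] || continue
      [[ $status == *"Active devices: "*"$mounts"* ]] && found=1
  done < <(PCI_ALLOWED=$target ./scripts/setup.sh config)
  (( found == 1 )) && echo "controller protected by its active mount"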
setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:36.596 00:56:52 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:36.596 00:56:52 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:36.596 00:56:52 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:0b:00.0 data@nvme0n1 '' '' 00:03:36.596 00:56:52 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:03:36.596 00:56:52 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:36.596 00:56:52 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:36.596 00:56:52 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:36.596 00:56:52 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:36.596 00:56:52 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:36.596 00:56:52 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:36.596 00:56:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.596 00:56:52 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:03:36.596 00:56:52 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:36.596 00:56:52 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:36.596 00:56:52 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:37.969 00:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:37.970 00:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.970 00:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:37.970 00:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.970 00:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:37.970 00:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.970 00:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:37.970 00:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.970 00:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:37.970 00:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.970 00:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:37.970 00:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.970 00:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:37.970 00:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.970 00:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:37.970 00:56:53 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.970 00:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:37.970 00:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:37.970 00:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:37.970 00:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.970 00:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:37.970 00:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.970 00:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:37.970 00:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.970 00:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:37.970 00:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.970 00:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:37.970 00:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.970 00:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:37.970 00:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.970 00:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:37.970 00:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.970 00:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:37.970 00:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.970 00:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:37.970 00:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.970 00:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:37.970 00:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:37.970 00:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:37.970 00:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:37.970 00:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:37.970 00:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:37.970 00:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:37.970 00:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:37.970 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:37.970 00:03:37.970 real 0m6.556s 00:03:37.970 user 0m1.546s 00:03:37.970 sys 0m2.558s 00:03:37.970 00:56:53 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:37.970 00:56:53 
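cleanup_nvme, which closes the test just above, is a strict teardown: unmount if the mountpoint is busy, then wipefs both the partition and the whole disk so the next test inherits a signature-free device (the "53 ef" bytes wipefs reports are the ext4 superblock magic, 0xEF53). A sketch, with the mount directory reduced to a placeholder variable:

  # Tear down the nvme_mount fixture: unmount, then wipe all signatures.
  nvme_mount=$PWD/test/setup/nvme_mount   # placeholder for the path used in this run
  mountpoint -q "$nvme_mount" && umount "$nvme_mount"
  [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1
  [[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1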
setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:03:37.970 ************************************ 00:03:37.970 END TEST nvme_mount 00:03:37.970 ************************************ 00:03:37.970 00:56:53 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:03:37.970 00:56:53 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:37.970 00:56:53 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:37.970 00:56:53 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:37.970 00:56:53 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:37.970 ************************************ 00:03:37.970 START TEST dm_mount 00:03:37.970 ************************************ 00:03:37.970 00:56:53 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:03:37.970 00:56:53 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:37.970 00:56:53 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:37.970 00:56:53 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:37.970 00:56:53 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:37.970 00:56:53 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:37.970 00:56:53 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:37.970 00:56:53 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:37.970 00:56:53 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:37.970 00:56:53 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:37.970 00:56:53 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:37.970 00:56:53 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:37.970 00:56:53 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:37.970 00:56:53 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:37.970 00:56:53 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:37.970 00:56:53 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:37.970 00:56:53 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:37.970 00:56:53 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:37.970 00:56:53 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:37.970 00:56:53 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:37.970 00:56:53 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:37.970 00:56:53 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:39.344 Creating new GPT entries in memory. 00:03:39.344 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:39.344 other utilities. 00:03:39.344 00:56:54 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:39.344 00:56:54 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:39.344 00:56:54 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:03:39.344 00:56:54 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:39.344 00:56:54 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:40.279 Creating new GPT entries in memory. 00:03:40.279 The operation has completed successfully. 00:03:40.279 00:56:55 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:40.279 00:56:55 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:40.279 00:56:55 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:40.279 00:56:55 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:40.279 00:56:55 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:41.211 The operation has completed successfully. 00:03:41.211 00:56:56 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:41.211 00:56:56 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:41.211 00:56:56 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 4018541 00:03:41.211 00:56:57 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:41.211 00:56:57 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:41.211 00:56:57 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:41.211 00:56:57 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:41.211 00:56:57 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:03:41.211 00:56:57 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:41.211 00:56:57 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:03:41.211 00:56:57 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:41.211 00:56:57 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:41.211 00:56:57 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:41.211 00:56:57 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:03:41.211 00:56:57 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:41.211 00:56:57 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:41.211 00:56:57 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:41.211 00:56:57 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:03:41.211 00:56:57 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:41.211 00:56:57 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:41.211 00:56:57 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:41.211 00:56:57 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:41.211 00:56:57 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:0b:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:41.211 00:56:57 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:03:41.211 00:56:57 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:41.211 00:56:57 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:41.211 00:56:57 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:41.211 00:56:57 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:41.211 00:56:57 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:41.211 00:56:57 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:03:41.211 00:56:57 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:41.211 00:56:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.211 00:56:57 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:03:41.211 00:56:57 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:41.211 00:56:57 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:41.211 00:56:57 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:42.149 00:56:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:42.149 00:56:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.149 00:56:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:42.149 00:56:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.149 00:56:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:42.149 00:56:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.149 00:56:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:42.149 00:56:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.149 00:56:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:42.149 00:56:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.149 00:56:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:42.149 00:56:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.149 00:56:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:42.149 00:56:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.149 00:56:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == 
\0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:42.149 00:56:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.408 00:56:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:42.408 00:56:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:42.408 00:56:58 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:42.408 00:56:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.408 00:56:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:42.408 00:56:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.408 00:56:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:42.408 00:56:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.408 00:56:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:42.408 00:56:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.408 00:56:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:42.408 00:56:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.408 00:56:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:42.408 00:56:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.408 00:56:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:42.408 00:56:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.408 00:56:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:42.408 00:56:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.408 00:56:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:42.408 00:56:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.408 00:56:58 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:42.408 00:56:58 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:42.408 00:56:58 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:42.408 00:56:58 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:42.408 00:56:58 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:42.408 00:56:58 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:42.408 00:56:58 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:0b:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:42.408 00:56:58 
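dm_mount layers a device-mapper node over the two fresh partitions, and the trace above validates the wiring from both directions: /dev/mapper/nvme_dm_test resolves to dm-0, and each partition lists dm-0 under its sysfs holders directory. A sketch of creating and checking such a node; the linear table here is illustrative, since the log does not show the exact table dm_mount uses:

  # Two 2097152-sector partitions stitched into one linear dm device.
  printf '%s\n' \
      '0 2097152 linear /dev/nvme0n1p1 0' \
      '2097152 2097152 linear /dev/nvme0n1p2 0' |
      dmsetup create nvme_dm_test
  dm=$(basename "$(readlink -f /dev/mapper/nvme_dm_test)")   # e.g. dm-0
  [[ -e /sys/class/block/nvme0n1p1/holders/$dm ]] &&
      [[ -e /sys/class/block/nvme0n1p2/holders/$dm ]] &&
      echo "both partitions held by $dm"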
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:03:42.408 00:56:58 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:42.408 00:56:58 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:42.408 00:56:58 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:03:42.408 00:56:58 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:42.408 00:56:58 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:42.408 00:56:58 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:42.408 00:56:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.408 00:56:58 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:03:42.408 00:56:58 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:42.408 00:56:58 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:42.408 00:56:58 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:43.783 00:56:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:43.783 00:56:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.783 00:56:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:43.783 00:56:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.783 00:56:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:43.783 00:56:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.783 00:56:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:43.783 00:56:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.783 00:56:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:43.783 00:56:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.783 00:56:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:43.783 00:56:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.783 00:56:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:43.783 00:56:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.783 00:56:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:43.783 00:56:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.783 00:56:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:43.784 00:56:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:43.784 00:56:59 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:43.784 00:56:59 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.784 00:56:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:43.784 00:56:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.784 00:56:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:43.784 00:56:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.784 00:56:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:43.784 00:56:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.784 00:56:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:43.784 00:56:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.784 00:56:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:43.784 00:56:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.784 00:56:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:43.784 00:56:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.784 00:56:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:43.784 00:56:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.784 00:56:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:43.784 00:56:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.784 00:56:59 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:43.784 00:56:59 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:43.784 00:56:59 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:03:43.784 00:56:59 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:03:43.784 00:56:59 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:43.784 00:56:59 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:43.784 00:56:59 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:43.784 00:56:59 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:43.784 00:56:59 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:43.784 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:43.784 00:56:59 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:43.784 00:56:59 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:43.784 00:03:43.784 real 0m5.809s 00:03:43.784 user 0m0.988s 00:03:43.784 sys 0m1.673s 00:03:43.784 00:56:59 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:43.784 00:56:59 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:03:43.784 ************************************ 00:03:43.784 END TEST dm_mount 00:03:43.784 ************************************ 00:03:43.784 00:56:59 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 
0 00:03:43.784 00:56:59 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:03:43.784 00:56:59 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:03:43.784 00:56:59 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:43.784 00:56:59 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:43.784 00:56:59 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:43.784 00:56:59 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:43.784 00:56:59 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:44.043 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:44.043 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:44.043 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:44.043 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:44.043 00:57:00 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:03:44.043 00:57:00 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:44.043 00:57:00 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:44.043 00:57:00 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:44.043 00:57:00 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:44.043 00:57:00 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:44.043 00:57:00 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:44.301 00:03:44.301 real 0m14.302s 00:03:44.301 user 0m3.190s 00:03:44.301 sys 0m5.279s 00:03:44.301 00:57:00 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:44.301 00:57:00 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:44.301 ************************************ 00:03:44.301 END TEST devices 00:03:44.301 ************************************ 00:03:44.301 00:57:00 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:44.301 00:03:44.301 real 0m45.110s 00:03:44.301 user 0m12.917s 00:03:44.301 sys 0m20.232s 00:03:44.301 00:57:00 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:44.301 00:57:00 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:44.301 ************************************ 00:03:44.301 END TEST setup.sh 00:03:44.301 ************************************ 00:03:44.301 00:57:00 -- common/autotest_common.sh@1142 -- # return 0 00:03:44.301 00:57:00 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:45.675 Hugepages 00:03:45.675 node hugesize free / total 00:03:45.675 node0 1048576kB 0 / 0 00:03:45.675 node0 2048kB 2048 / 2048 00:03:45.675 node1 1048576kB 0 / 0 00:03:45.675 node1 2048kB 0 / 0 00:03:45.675 00:03:45.675 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:45.675 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:03:45.675 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:03:45.675 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:03:45.675 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:03:45.675 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:03:45.675 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:03:45.675 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:03:45.675 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:03:45.675 NVMe 0000:0b:00.0 
8086 0a54 0 nvme nvme0 nvme0n1 00:03:45.675 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:03:45.675 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:03:45.675 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:03:45.675 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:03:45.675 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:03:45.675 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:03:45.675 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:03:45.675 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:03:45.675 00:57:01 -- spdk/autotest.sh@130 -- # uname -s 00:03:45.676 00:57:01 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:03:45.676 00:57:01 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:45.676 00:57:01 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:46.611 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:46.611 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:46.611 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:46.611 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:46.611 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:46.611 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:46.611 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:46.611 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:46.870 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:46.870 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:46.870 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:46.870 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:46.870 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:46.870 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:46.870 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:46.870 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:47.806 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:03:47.806 00:57:03 -- common/autotest_common.sh@1532 -- # sleep 1 00:03:48.744 00:57:04 -- common/autotest_common.sh@1533 -- # bdfs=() 00:03:48.744 00:57:04 -- common/autotest_common.sh@1533 -- # local bdfs 00:03:48.744 00:57:04 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:03:49.002 00:57:04 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:03:49.002 00:57:04 -- common/autotest_common.sh@1513 -- # bdfs=() 00:03:49.002 00:57:04 -- common/autotest_common.sh@1513 -- # local bdfs 00:03:49.002 00:57:04 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:49.002 00:57:04 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:49.002 00:57:04 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:03:49.002 00:57:04 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:03:49.002 00:57:04 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:0b:00.0 00:03:49.002 00:57:04 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:49.936 Waiting for block devices as requested 00:03:49.936 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:50.195 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:50.195 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:50.195 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:50.454 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:50.454 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:50.454 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:50.454 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:50.714 0000:0b:00.0 (8086 0a54): vfio-pci -> 
nvme 00:03:50.714 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:50.973 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:50.973 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:50.973 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:50.973 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:51.232 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:51.232 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:51.232 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:51.490 00:57:07 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:03:51.490 00:57:07 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:0b:00.0 00:03:51.490 00:57:07 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:03:51.490 00:57:07 -- common/autotest_common.sh@1502 -- # grep 0000:0b:00.0/nvme/nvme 00:03:51.490 00:57:07 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 00:03:51.490 00:57:07 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 ]] 00:03:51.490 00:57:07 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 00:03:51.490 00:57:07 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:03:51.490 00:57:07 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:03:51.490 00:57:07 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:03:51.490 00:57:07 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:03:51.490 00:57:07 -- common/autotest_common.sh@1545 -- # grep oacs 00:03:51.490 00:57:07 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:03:51.490 00:57:07 -- common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:03:51.490 00:57:07 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:03:51.490 00:57:07 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:03:51.490 00:57:07 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:03:51.490 00:57:07 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:03:51.490 00:57:07 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:03:51.490 00:57:07 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:03:51.491 00:57:07 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:03:51.491 00:57:07 -- common/autotest_common.sh@1557 -- # continue 00:03:51.491 00:57:07 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:03:51.491 00:57:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:51.491 00:57:07 -- common/autotest_common.sh@10 -- # set +x 00:03:51.491 00:57:07 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:03:51.491 00:57:07 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:51.491 00:57:07 -- common/autotest_common.sh@10 -- # set +x 00:03:51.491 00:57:07 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:52.865 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:52.865 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:52.865 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:52.865 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:52.865 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:52.865 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:52.865 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:52.865 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:52.865 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:52.865 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:52.865 
0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:52.865 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:52.865 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:52.865 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:52.865 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:52.865 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:53.801 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:03:53.801 00:57:09 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:03:53.801 00:57:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:53.801 00:57:09 -- common/autotest_common.sh@10 -- # set +x 00:03:54.059 00:57:09 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:03:54.059 00:57:09 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:03:54.059 00:57:09 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:03:54.059 00:57:09 -- common/autotest_common.sh@1577 -- # bdfs=() 00:03:54.059 00:57:09 -- common/autotest_common.sh@1577 -- # local bdfs 00:03:54.059 00:57:09 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:03:54.059 00:57:09 -- common/autotest_common.sh@1513 -- # bdfs=() 00:03:54.059 00:57:09 -- common/autotest_common.sh@1513 -- # local bdfs 00:03:54.059 00:57:09 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:54.059 00:57:09 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:54.059 00:57:09 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:03:54.059 00:57:09 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:03:54.059 00:57:09 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:0b:00.0 00:03:54.059 00:57:09 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:03:54.059 00:57:09 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:0b:00.0/device 00:03:54.059 00:57:09 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:03:54.059 00:57:09 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:54.059 00:57:09 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:03:54.059 00:57:09 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:0b:00.0 00:03:54.059 00:57:09 -- common/autotest_common.sh@1592 -- # [[ -z 0000:0b:00.0 ]] 00:03:54.059 00:57:09 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=4024458 00:03:54.059 00:57:09 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:54.059 00:57:09 -- common/autotest_common.sh@1598 -- # waitforlisten 4024458 00:03:54.059 00:57:09 -- common/autotest_common.sh@829 -- # '[' -z 4024458 ']' 00:03:54.059 00:57:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:54.059 00:57:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:54.059 00:57:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:54.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:54.059 00:57:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:54.059 00:57:09 -- common/autotest_common.sh@10 -- # set +x 00:03:54.059 [2024-07-16 00:57:09.923932] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 
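(The opal_revert_cleanup step above decides which controllers are eligible by listing NVMe bdfs from the generated SPDK config and keeping only those whose PCI device ID matches 0x0a54. A minimal sketch of that filter, reusing the sysfs path and jq expression visible in the trace; the gen_nvme.sh location is the workspace path shown above:

    # enumerate NVMe bdfs from gen_nvme.sh JSON, keep only 0x0a54 devices
    get_nvme_bdfs() {
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'
    }
    bdfs=()
    for bdf in $(get_nvme_bdfs); do
        device=$(cat "/sys/bus/pci/devices/$bdf/device")   # e.g. 0x0a54
        [[ $device == 0x0a54 ]] && bdfs+=("$bdf")
    done
    printf '%s\n' "${bdfs[@]}"

Each kept bdf is then attached as a controller and reverted with rpc.py bdev_nvme_opal_revert, which is the call that fails with OPAL error 18 just below.)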
00:03:54.059 [2024-07-16 00:57:09.924037] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4024458 ] 00:03:54.059 EAL: No free 2048 kB hugepages reported on node 1 00:03:54.059 [2024-07-16 00:57:09.980947] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:54.321 [2024-07-16 00:57:10.091548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:54.578 00:57:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:54.578 00:57:10 -- common/autotest_common.sh@862 -- # return 0 00:03:54.578 00:57:10 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:03:54.578 00:57:10 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:03:54.578 00:57:10 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:0b:00.0 00:03:57.862 nvme0n1 00:03:57.862 00:57:13 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:57.862 [2024-07-16 00:57:13.624675] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:03:57.862 [2024-07-16 00:57:13.624719] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:03:57.862 request: 00:03:57.862 { 00:03:57.862 "nvme_ctrlr_name": "nvme0", 00:03:57.862 "password": "test", 00:03:57.862 "method": "bdev_nvme_opal_revert", 00:03:57.862 "req_id": 1 00:03:57.862 } 00:03:57.862 Got JSON-RPC error response 00:03:57.862 response: 00:03:57.862 { 00:03:57.862 "code": -32603, 00:03:57.862 "message": "Internal error" 00:03:57.862 } 00:03:57.862 00:57:13 -- common/autotest_common.sh@1604 -- # true 00:03:57.862 00:57:13 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:03:57.862 00:57:13 -- common/autotest_common.sh@1608 -- # killprocess 4024458 00:03:57.862 00:57:13 -- common/autotest_common.sh@948 -- # '[' -z 4024458 ']' 00:03:57.862 00:57:13 -- common/autotest_common.sh@952 -- # kill -0 4024458 00:03:57.862 00:57:13 -- common/autotest_common.sh@953 -- # uname 00:03:57.862 00:57:13 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:57.862 00:57:13 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4024458 00:03:57.862 00:57:13 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:57.862 00:57:13 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:57.862 00:57:13 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4024458' 00:03:57.862 killing process with pid 4024458 00:03:57.862 00:57:13 -- common/autotest_common.sh@967 -- # kill 4024458 00:03:57.862 00:57:13 -- common/autotest_common.sh@972 -- # wait 4024458 00:03:59.760 00:57:15 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:03:59.760 00:57:15 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:03:59.760 00:57:15 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:59.760 00:57:15 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:59.760 00:57:15 -- spdk/autotest.sh@162 -- # timing_enter lib 00:03:59.760 00:57:15 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:59.760 00:57:15 -- common/autotest_common.sh@10 -- # set +x 00:03:59.760 00:57:15 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:03:59.760 00:57:15 -- spdk/autotest.sh@168 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:59.760 00:57:15 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:59.760 00:57:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:59.760 00:57:15 -- common/autotest_common.sh@10 -- # set +x 00:03:59.760 ************************************ 00:03:59.760 START TEST env 00:03:59.760 ************************************ 00:03:59.760 00:57:15 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:59.760 * Looking for test storage... 00:03:59.760 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:59.760 00:57:15 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:59.760 00:57:15 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:59.760 00:57:15 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:59.760 00:57:15 env -- common/autotest_common.sh@10 -- # set +x 00:03:59.760 ************************************ 00:03:59.760 START TEST env_memory 00:03:59.760 ************************************ 00:03:59.760 00:57:15 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:59.760 00:03:59.760 00:03:59.760 CUnit - A unit testing framework for C - Version 2.1-3 00:03:59.760 http://cunit.sourceforge.net/ 00:03:59.760 00:03:59.760 00:03:59.760 Suite: memory 00:03:59.760 Test: alloc and free memory map ...[2024-07-16 00:57:15.530747] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:59.760 passed 00:03:59.760 Test: mem map translation ...[2024-07-16 00:57:15.550909] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:59.760 [2024-07-16 00:57:15.550931] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:59.760 [2024-07-16 00:57:15.550997] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:59.760 [2024-07-16 00:57:15.551010] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:59.760 passed 00:03:59.760 Test: mem map registration ...[2024-07-16 00:57:15.591545] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:03:59.760 [2024-07-16 00:57:15.591565] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:03:59.760 passed 00:03:59.760 Test: mem map adjacent registrations ...passed 00:03:59.760 00:03:59.760 Run Summary: Type Total Ran Passed Failed Inactive 00:03:59.760 suites 1 1 n/a 0 0 00:03:59.760 tests 4 4 4 0 0 00:03:59.760 asserts 152 152 152 0 n/a 00:03:59.760 00:03:59.760 Elapsed time = 0.140 seconds 00:03:59.760 00:03:59.760 real 0m0.149s 00:03:59.760 user 0m0.144s 00:03:59.760 sys 0m0.004s 00:03:59.760 00:57:15 
env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:59.760 00:57:15 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:59.760 ************************************ 00:03:59.760 END TEST env_memory 00:03:59.760 ************************************ 00:03:59.760 00:57:15 env -- common/autotest_common.sh@1142 -- # return 0 00:03:59.760 00:57:15 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:59.760 00:57:15 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:59.760 00:57:15 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:59.760 00:57:15 env -- common/autotest_common.sh@10 -- # set +x 00:03:59.760 ************************************ 00:03:59.760 START TEST env_vtophys 00:03:59.760 ************************************ 00:03:59.760 00:57:15 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:59.760 EAL: lib.eal log level changed from notice to debug 00:03:59.760 EAL: Detected lcore 0 as core 0 on socket 0 00:03:59.760 EAL: Detected lcore 1 as core 1 on socket 0 00:03:59.760 EAL: Detected lcore 2 as core 2 on socket 0 00:03:59.760 EAL: Detected lcore 3 as core 3 on socket 0 00:03:59.760 EAL: Detected lcore 4 as core 4 on socket 0 00:03:59.760 EAL: Detected lcore 5 as core 5 on socket 0 00:03:59.760 EAL: Detected lcore 6 as core 8 on socket 0 00:03:59.760 EAL: Detected lcore 7 as core 9 on socket 0 00:03:59.760 EAL: Detected lcore 8 as core 10 on socket 0 00:03:59.760 EAL: Detected lcore 9 as core 11 on socket 0 00:03:59.760 EAL: Detected lcore 10 as core 12 on socket 0 00:03:59.760 EAL: Detected lcore 11 as core 13 on socket 0 00:03:59.760 EAL: Detected lcore 12 as core 0 on socket 1 00:03:59.760 EAL: Detected lcore 13 as core 1 on socket 1 00:03:59.760 EAL: Detected lcore 14 as core 2 on socket 1 00:03:59.760 EAL: Detected lcore 15 as core 3 on socket 1 00:03:59.760 EAL: Detected lcore 16 as core 4 on socket 1 00:03:59.760 EAL: Detected lcore 17 as core 5 on socket 1 00:03:59.760 EAL: Detected lcore 18 as core 8 on socket 1 00:03:59.760 EAL: Detected lcore 19 as core 9 on socket 1 00:03:59.760 EAL: Detected lcore 20 as core 10 on socket 1 00:03:59.760 EAL: Detected lcore 21 as core 11 on socket 1 00:03:59.760 EAL: Detected lcore 22 as core 12 on socket 1 00:03:59.760 EAL: Detected lcore 23 as core 13 on socket 1 00:03:59.760 EAL: Detected lcore 24 as core 0 on socket 0 00:03:59.760 EAL: Detected lcore 25 as core 1 on socket 0 00:03:59.760 EAL: Detected lcore 26 as core 2 on socket 0 00:03:59.760 EAL: Detected lcore 27 as core 3 on socket 0 00:03:59.760 EAL: Detected lcore 28 as core 4 on socket 0 00:03:59.760 EAL: Detected lcore 29 as core 5 on socket 0 00:03:59.760 EAL: Detected lcore 30 as core 8 on socket 0 00:03:59.760 EAL: Detected lcore 31 as core 9 on socket 0 00:03:59.760 EAL: Detected lcore 32 as core 10 on socket 0 00:03:59.760 EAL: Detected lcore 33 as core 11 on socket 0 00:03:59.760 EAL: Detected lcore 34 as core 12 on socket 0 00:03:59.760 EAL: Detected lcore 35 as core 13 on socket 0 00:03:59.760 EAL: Detected lcore 36 as core 0 on socket 1 00:03:59.760 EAL: Detected lcore 37 as core 1 on socket 1 00:03:59.760 EAL: Detected lcore 38 as core 2 on socket 1 00:03:59.760 EAL: Detected lcore 39 as core 3 on socket 1 00:03:59.760 EAL: Detected lcore 40 as core 4 on socket 1 00:03:59.760 EAL: Detected lcore 41 as core 5 on socket 1 00:03:59.760 EAL: Detected 
lcore 42 as core 8 on socket 1 00:03:59.760 EAL: Detected lcore 43 as core 9 on socket 1 00:03:59.760 EAL: Detected lcore 44 as core 10 on socket 1 00:03:59.760 EAL: Detected lcore 45 as core 11 on socket 1 00:03:59.760 EAL: Detected lcore 46 as core 12 on socket 1 00:03:59.760 EAL: Detected lcore 47 as core 13 on socket 1 00:03:59.760 EAL: Maximum logical cores by configuration: 128 00:03:59.760 EAL: Detected CPU lcores: 48 00:03:59.760 EAL: Detected NUMA nodes: 2 00:03:59.760 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:59.760 EAL: Detected shared linkage of DPDK 00:03:59.760 EAL: No shared files mode enabled, IPC will be disabled 00:03:59.760 EAL: Bus pci wants IOVA as 'DC' 00:03:59.760 EAL: Buses did not request a specific IOVA mode. 00:03:59.760 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:59.760 EAL: Selected IOVA mode 'VA' 00:03:59.760 EAL: No free 2048 kB hugepages reported on node 1 00:03:59.760 EAL: Probing VFIO support... 00:03:59.760 EAL: IOMMU type 1 (Type 1) is supported 00:03:59.760 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:59.760 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:59.760 EAL: VFIO support initialized 00:03:59.760 EAL: Ask a virtual area of 0x2e000 bytes 00:03:59.760 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:59.760 EAL: Setting up physically contiguous memory... 00:03:59.760 EAL: Setting maximum number of open files to 524288 00:03:59.760 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:59.760 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:59.761 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:59.761 EAL: Ask a virtual area of 0x61000 bytes 00:03:59.761 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:59.761 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:59.761 EAL: Ask a virtual area of 0x400000000 bytes 00:03:59.761 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:59.761 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:59.761 EAL: Ask a virtual area of 0x61000 bytes 00:03:59.761 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:59.761 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:59.761 EAL: Ask a virtual area of 0x400000000 bytes 00:03:59.761 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:59.761 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:59.761 EAL: Ask a virtual area of 0x61000 bytes 00:03:59.761 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:59.761 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:59.761 EAL: Ask a virtual area of 0x400000000 bytes 00:03:59.761 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:59.761 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:59.761 EAL: Ask a virtual area of 0x61000 bytes 00:03:59.761 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:59.761 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:59.761 EAL: Ask a virtual area of 0x400000000 bytes 00:03:59.761 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:59.761 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:59.761 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:59.761 EAL: Ask a virtual area of 0x61000 bytes 00:03:59.761 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:59.761 EAL: Memseg list 
allocated at socket 1, page size 0x800kB 00:03:59.761 EAL: Ask a virtual area of 0x400000000 bytes 00:03:59.761 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:59.761 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:59.761 EAL: Ask a virtual area of 0x61000 bytes 00:03:59.761 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:59.761 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:59.761 EAL: Ask a virtual area of 0x400000000 bytes 00:03:59.761 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:59.761 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:59.761 EAL: Ask a virtual area of 0x61000 bytes 00:03:59.761 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:59.761 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:59.761 EAL: Ask a virtual area of 0x400000000 bytes 00:03:59.761 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:59.761 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:59.761 EAL: Ask a virtual area of 0x61000 bytes 00:03:59.761 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:59.761 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:59.761 EAL: Ask a virtual area of 0x400000000 bytes 00:03:59.761 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:03:59.761 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:59.761 EAL: Hugepages will be freed exactly as allocated. 00:03:59.761 EAL: No shared files mode enabled, IPC is disabled 00:03:59.761 EAL: No shared files mode enabled, IPC is disabled 00:03:59.761 EAL: TSC frequency is ~2700000 KHz 00:03:59.761 EAL: Main lcore 0 is ready (tid=7f0d5b680a00;cpuset=[0]) 00:03:59.761 EAL: Trying to obtain current memory policy. 00:03:59.761 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:59.761 EAL: Restoring previous memory policy: 0 00:03:59.761 EAL: request: mp_malloc_sync 00:03:59.761 EAL: No shared files mode enabled, IPC is disabled 00:03:59.761 EAL: Heap on socket 0 was expanded by 2MB 00:03:59.761 EAL: No shared files mode enabled, IPC is disabled 00:04:00.018 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:00.018 EAL: Mem event callback 'spdk:(nil)' registered 00:04:00.018 00:04:00.018 00:04:00.018 CUnit - A unit testing framework for C - Version 2.1-3 00:04:00.018 http://cunit.sourceforge.net/ 00:04:00.018 00:04:00.018 00:04:00.018 Suite: components_suite 00:04:00.018 Test: vtophys_malloc_test ...passed 00:04:00.018 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:00.018 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.018 EAL: Restoring previous memory policy: 4 00:04:00.018 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.018 EAL: request: mp_malloc_sync 00:04:00.018 EAL: No shared files mode enabled, IPC is disabled 00:04:00.018 EAL: Heap on socket 0 was expanded by 4MB 00:04:00.018 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.018 EAL: request: mp_malloc_sync 00:04:00.018 EAL: No shared files mode enabled, IPC is disabled 00:04:00.018 EAL: Heap on socket 0 was shrunk by 4MB 00:04:00.018 EAL: Trying to obtain current memory policy. 
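(The per-socket memseg lists the EAL just reserved are carved out of the 2048 kB hugepages shown in the "node0 2048kB 2048 / 2048" table earlier. A quick way to read those per-node pools outside the harness; the sysfs paths are kernel-standard, not SPDK-specific:

    # print per-NUMA-node hugepage pools as "node size free / total"
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            size=${hp##*hugepages-}
            echo "${node##*/} $size $(cat "$hp/free_hugepages") / $(cat "$hp/nr_hugepages")"
        done
    done

The expand/shrink messages that follow are the vtophys unit test making progressively larger allocations and watching the registered 'spdk:' mem event callback fire on every heap change.)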
00:04:00.018 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.018 EAL: Restoring previous memory policy: 4 00:04:00.018 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.018 EAL: request: mp_malloc_sync 00:04:00.019 EAL: No shared files mode enabled, IPC is disabled 00:04:00.019 EAL: Heap on socket 0 was expanded by 6MB 00:04:00.019 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.019 EAL: request: mp_malloc_sync 00:04:00.019 EAL: No shared files mode enabled, IPC is disabled 00:04:00.019 EAL: Heap on socket 0 was shrunk by 6MB 00:04:00.019 EAL: Trying to obtain current memory policy. 00:04:00.019 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.019 EAL: Restoring previous memory policy: 4 00:04:00.019 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.019 EAL: request: mp_malloc_sync 00:04:00.019 EAL: No shared files mode enabled, IPC is disabled 00:04:00.019 EAL: Heap on socket 0 was expanded by 10MB 00:04:00.019 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.019 EAL: request: mp_malloc_sync 00:04:00.019 EAL: No shared files mode enabled, IPC is disabled 00:04:00.019 EAL: Heap on socket 0 was shrunk by 10MB 00:04:00.019 EAL: Trying to obtain current memory policy. 00:04:00.019 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.019 EAL: Restoring previous memory policy: 4 00:04:00.019 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.019 EAL: request: mp_malloc_sync 00:04:00.019 EAL: No shared files mode enabled, IPC is disabled 00:04:00.019 EAL: Heap on socket 0 was expanded by 18MB 00:04:00.019 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.019 EAL: request: mp_malloc_sync 00:04:00.019 EAL: No shared files mode enabled, IPC is disabled 00:04:00.019 EAL: Heap on socket 0 was shrunk by 18MB 00:04:00.019 EAL: Trying to obtain current memory policy. 00:04:00.019 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.019 EAL: Restoring previous memory policy: 4 00:04:00.019 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.019 EAL: request: mp_malloc_sync 00:04:00.019 EAL: No shared files mode enabled, IPC is disabled 00:04:00.019 EAL: Heap on socket 0 was expanded by 34MB 00:04:00.019 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.019 EAL: request: mp_malloc_sync 00:04:00.019 EAL: No shared files mode enabled, IPC is disabled 00:04:00.019 EAL: Heap on socket 0 was shrunk by 34MB 00:04:00.019 EAL: Trying to obtain current memory policy. 00:04:00.019 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.019 EAL: Restoring previous memory policy: 4 00:04:00.019 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.019 EAL: request: mp_malloc_sync 00:04:00.019 EAL: No shared files mode enabled, IPC is disabled 00:04:00.019 EAL: Heap on socket 0 was expanded by 66MB 00:04:00.019 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.019 EAL: request: mp_malloc_sync 00:04:00.019 EAL: No shared files mode enabled, IPC is disabled 00:04:00.019 EAL: Heap on socket 0 was shrunk by 66MB 00:04:00.019 EAL: Trying to obtain current memory policy. 
00:04:00.019 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.019 EAL: Restoring previous memory policy: 4 00:04:00.019 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.019 EAL: request: mp_malloc_sync 00:04:00.019 EAL: No shared files mode enabled, IPC is disabled 00:04:00.019 EAL: Heap on socket 0 was expanded by 130MB 00:04:00.019 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.019 EAL: request: mp_malloc_sync 00:04:00.019 EAL: No shared files mode enabled, IPC is disabled 00:04:00.019 EAL: Heap on socket 0 was shrunk by 130MB 00:04:00.019 EAL: Trying to obtain current memory policy. 00:04:00.019 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.019 EAL: Restoring previous memory policy: 4 00:04:00.019 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.019 EAL: request: mp_malloc_sync 00:04:00.019 EAL: No shared files mode enabled, IPC is disabled 00:04:00.019 EAL: Heap on socket 0 was expanded by 258MB 00:04:00.019 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.276 EAL: request: mp_malloc_sync 00:04:00.276 EAL: No shared files mode enabled, IPC is disabled 00:04:00.276 EAL: Heap on socket 0 was shrunk by 258MB 00:04:00.276 EAL: Trying to obtain current memory policy. 00:04:00.276 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.276 EAL: Restoring previous memory policy: 4 00:04:00.276 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.276 EAL: request: mp_malloc_sync 00:04:00.276 EAL: No shared files mode enabled, IPC is disabled 00:04:00.276 EAL: Heap on socket 0 was expanded by 514MB 00:04:00.534 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.534 EAL: request: mp_malloc_sync 00:04:00.534 EAL: No shared files mode enabled, IPC is disabled 00:04:00.534 EAL: Heap on socket 0 was shrunk by 514MB 00:04:00.534 EAL: Trying to obtain current memory policy. 
00:04:00.534 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.792 EAL: Restoring previous memory policy: 4 00:04:00.792 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.792 EAL: request: mp_malloc_sync 00:04:00.792 EAL: No shared files mode enabled, IPC is disabled 00:04:00.792 EAL: Heap on socket 0 was expanded by 1026MB 00:04:01.049 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.307 EAL: request: mp_malloc_sync 00:04:01.307 EAL: No shared files mode enabled, IPC is disabled 00:04:01.307 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:01.307 passed 00:04:01.307 00:04:01.307 Run Summary: Type Total Ran Passed Failed Inactive 00:04:01.307 suites 1 1 n/a 0 0 00:04:01.307 tests 2 2 2 0 0 00:04:01.307 asserts 497 497 497 0 n/a 00:04:01.307 00:04:01.307 Elapsed time = 1.297 seconds 00:04:01.307 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.307 EAL: request: mp_malloc_sync 00:04:01.307 EAL: No shared files mode enabled, IPC is disabled 00:04:01.307 EAL: Heap on socket 0 was shrunk by 2MB 00:04:01.307 EAL: No shared files mode enabled, IPC is disabled 00:04:01.307 EAL: No shared files mode enabled, IPC is disabled 00:04:01.307 EAL: No shared files mode enabled, IPC is disabled 00:04:01.307 00:04:01.307 real 0m1.420s 00:04:01.307 user 0m0.825s 00:04:01.307 sys 0m0.554s 00:04:01.307 00:57:17 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:01.307 00:57:17 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:01.307 ************************************ 00:04:01.307 END TEST env_vtophys 00:04:01.307 ************************************ 00:04:01.307 00:57:17 env -- common/autotest_common.sh@1142 -- # return 0 00:04:01.307 00:57:17 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:01.307 00:57:17 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:01.307 00:57:17 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:01.307 00:57:17 env -- common/autotest_common.sh@10 -- # set +x 00:04:01.307 ************************************ 00:04:01.307 START TEST env_pci 00:04:01.307 ************************************ 00:04:01.307 00:57:17 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:01.307 00:04:01.307 00:04:01.307 CUnit - A unit testing framework for C - Version 2.1-3 00:04:01.307 http://cunit.sourceforge.net/ 00:04:01.307 00:04:01.307 00:04:01.307 Suite: pci 00:04:01.308 Test: pci_hook ...[2024-07-16 00:57:17.175358] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 4025347 has claimed it 00:04:01.308 EAL: Cannot find device (10000:00:01.0) 00:04:01.308 EAL: Failed to attach device on primary process 00:04:01.308 passed 00:04:01.308 00:04:01.308 Run Summary: Type Total Ran Passed Failed Inactive 00:04:01.308 suites 1 1 n/a 0 0 00:04:01.308 tests 1 1 1 0 0 00:04:01.308 asserts 25 25 25 0 n/a 00:04:01.308 00:04:01.308 Elapsed time = 0.022 seconds 00:04:01.308 00:04:01.308 real 0m0.036s 00:04:01.308 user 0m0.012s 00:04:01.308 sys 0m0.023s 00:04:01.308 00:57:17 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:01.308 00:57:17 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:01.308 ************************************ 00:04:01.308 END TEST env_pci 00:04:01.308 ************************************ 
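(The env_pci errors above are the intended behavior of SPDK's device-claim check: spdk_pci_device_claim() takes a lock file named /var/tmp/spdk_pci_lock_<bdf>, so a second process probing the same bdf backs off. Illustrative only, the same one-owner-per-device idea expressed with flock(1); SPDK takes the lock inside its C library, not via this command:

    # refuse to touch a PCI device another process has already claimed
    bdf=10000:00:01.0                          # fake bdf, as in the unit test above
    exec 9>"/var/tmp/spdk_pci_lock_$bdf"
    if ! flock -n 9; then
        echo "device $bdf already claimed by another process" >&2
        exit 1
    fi

The 10000:00:01.0 address is deliberately nonexistent, which is why the log also shows "Cannot find device" and a failed attach: the test only cares that the claim path errors out cleanly, and it passes.)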
00:04:01.308 00:57:17 env -- common/autotest_common.sh@1142 -- # return 0 00:04:01.308 00:57:17 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:01.308 00:57:17 env -- env/env.sh@15 -- # uname 00:04:01.308 00:57:17 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:01.308 00:57:17 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:01.308 00:57:17 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:01.308 00:57:17 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:01.308 00:57:17 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:01.308 00:57:17 env -- common/autotest_common.sh@10 -- # set +x 00:04:01.308 ************************************ 00:04:01.308 START TEST env_dpdk_post_init 00:04:01.308 ************************************ 00:04:01.308 00:57:17 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:01.308 EAL: Detected CPU lcores: 48 00:04:01.308 EAL: Detected NUMA nodes: 2 00:04:01.308 EAL: Detected shared linkage of DPDK 00:04:01.308 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:01.308 EAL: Selected IOVA mode 'VA' 00:04:01.308 EAL: No free 2048 kB hugepages reported on node 1 00:04:01.308 EAL: VFIO support initialized 00:04:01.308 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:01.566 EAL: Using IOMMU type 1 (Type 1) 00:04:01.566 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:04:01.566 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:04:01.566 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:04:01.566 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:04:01.566 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:04:01.566 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:04:01.566 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:04:01.566 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:04:02.499 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:0b:00.0 (socket 0) 00:04:02.499 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:04:02.499 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:04:02.499 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:04:02.499 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:04:02.499 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:04:02.499 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:04:02.499 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:04:02.499 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:04:05.775 EAL: Releasing PCI mapped resource for 0000:0b:00.0 00:04:05.775 EAL: Calling pci_unmap_resource for 0000:0b:00.0 at 0x202001020000 00:04:05.775 Starting DPDK initialization... 00:04:05.775 Starting SPDK post initialization... 00:04:05.775 SPDK NVMe probe 00:04:05.775 Attaching to 0000:0b:00.0 00:04:05.775 Attached to 0000:0b:00.0 00:04:05.775 Cleaning up... 
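(env.sh assembles the EAL command line for env_dpdk_post_init exactly as traced: a core mask first, then --base-virtaddr on Linux so DPDK's mappings land at a predictable virtual address. Paraphrased as a sketch; the binary path is the one from the workspace above:

    # mirror env.sh: core mask, plus a fixed base virtual address on Linux
    argv='-c 0x1 '
    [ "$(uname)" = Linux ] && argv+='--base-virtaddr=0x200000000000'
    test/env/env_dpdk_post_init/env_dpdk_post_init $argv

The probe sequence that follows is EAL walking the allowed devices in order: the eight socket-0 ioatdma channels, the NVMe disk at 0000:0b:00.0, then the socket-1 ioat channels, with the BAR unmapped again at detach.)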
00:04:05.775 00:04:05.775 real 0m4.329s 00:04:05.775 user 0m3.205s 00:04:05.775 sys 0m0.181s 00:04:05.775 00:57:21 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:05.775 00:57:21 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:05.775 ************************************ 00:04:05.775 END TEST env_dpdk_post_init 00:04:05.775 ************************************ 00:04:05.775 00:57:21 env -- common/autotest_common.sh@1142 -- # return 0 00:04:05.775 00:57:21 env -- env/env.sh@26 -- # uname 00:04:05.775 00:57:21 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:05.775 00:57:21 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:05.775 00:57:21 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:05.775 00:57:21 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:05.775 00:57:21 env -- common/autotest_common.sh@10 -- # set +x 00:04:05.775 ************************************ 00:04:05.775 START TEST env_mem_callbacks 00:04:05.775 ************************************ 00:04:05.775 00:57:21 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:05.775 EAL: Detected CPU lcores: 48 00:04:05.775 EAL: Detected NUMA nodes: 2 00:04:05.775 EAL: Detected shared linkage of DPDK 00:04:05.775 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:05.775 EAL: Selected IOVA mode 'VA' 00:04:05.775 EAL: No free 2048 kB hugepages reported on node 1 00:04:05.775 EAL: VFIO support initialized 00:04:05.775 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:05.775 00:04:05.775 00:04:05.775 CUnit - A unit testing framework for C - Version 2.1-3 00:04:05.775 http://cunit.sourceforge.net/ 00:04:05.775 00:04:05.775 00:04:05.775 Suite: memory 00:04:05.775 Test: test ... 
00:04:05.775 register 0x200000200000 2097152 00:04:05.775 malloc 3145728 00:04:05.775 register 0x200000400000 4194304 00:04:05.775 buf 0x200000500000 len 3145728 PASSED 00:04:05.775 malloc 64 00:04:05.775 buf 0x2000004fff40 len 64 PASSED 00:04:05.775 malloc 4194304 00:04:05.775 register 0x200000800000 6291456 00:04:05.775 buf 0x200000a00000 len 4194304 PASSED 00:04:05.775 free 0x200000500000 3145728 00:04:05.775 free 0x2000004fff40 64 00:04:05.775 unregister 0x200000400000 4194304 PASSED 00:04:05.775 free 0x200000a00000 4194304 00:04:05.775 unregister 0x200000800000 6291456 PASSED 00:04:05.775 malloc 8388608 00:04:05.775 register 0x200000400000 10485760 00:04:05.775 buf 0x200000600000 len 8388608 PASSED 00:04:05.775 free 0x200000600000 8388608 00:04:05.775 unregister 0x200000400000 10485760 PASSED 00:04:05.775 passed 00:04:05.775 00:04:05.775 Run Summary: Type Total Ran Passed Failed Inactive 00:04:05.775 suites 1 1 n/a 0 0 00:04:05.775 tests 1 1 1 0 0 00:04:05.775 asserts 15 15 15 0 n/a 00:04:05.775 00:04:05.775 Elapsed time = 0.005 seconds 00:04:05.775 00:04:05.775 real 0m0.047s 00:04:05.775 user 0m0.016s 00:04:05.775 sys 0m0.031s 00:04:05.775 00:57:21 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:05.775 00:57:21 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:05.775 ************************************ 00:04:05.775 END TEST env_mem_callbacks 00:04:05.775 ************************************ 00:04:05.775 00:57:21 env -- common/autotest_common.sh@1142 -- # return 0 00:04:05.775 00:04:05.775 real 0m6.281s 00:04:05.775 user 0m4.338s 00:04:05.775 sys 0m0.976s 00:04:05.775 00:57:21 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:05.775 00:57:21 env -- common/autotest_common.sh@10 -- # set +x 00:04:05.775 ************************************ 00:04:05.775 END TEST env 00:04:05.775 ************************************ 00:04:05.775 00:57:21 -- common/autotest_common.sh@1142 -- # return 0 00:04:05.775 00:57:21 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:05.775 00:57:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:05.775 00:57:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:05.775 00:57:21 -- common/autotest_common.sh@10 -- # set +x 00:04:05.775 ************************************ 00:04:05.775 START TEST rpc 00:04:05.775 ************************************ 00:04:05.775 00:57:21 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:06.063 * Looking for test storage... 00:04:06.063 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:06.063 00:57:21 rpc -- rpc/rpc.sh@65 -- # spdk_pid=4026002 00:04:06.063 00:57:21 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:06.063 00:57:21 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:06.063 00:57:21 rpc -- rpc/rpc.sh@67 -- # waitforlisten 4026002 00:04:06.063 00:57:21 rpc -- common/autotest_common.sh@829 -- # '[' -z 4026002 ']' 00:04:06.063 00:57:21 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:06.063 00:57:21 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:06.063 00:57:21 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:06.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:06.063 00:57:21 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:06.063 00:57:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:06.063 [2024-07-16 00:57:21.858137] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:04:06.063 [2024-07-16 00:57:21.858222] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4026002 ] 00:04:06.063 EAL: No free 2048 kB hugepages reported on node 1 00:04:06.063 [2024-07-16 00:57:21.915073] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:06.063 [2024-07-16 00:57:22.020418] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:06.063 [2024-07-16 00:57:22.020481] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 4026002' to capture a snapshot of events at runtime. 00:04:06.063 [2024-07-16 00:57:22.020494] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:06.063 [2024-07-16 00:57:22.020505] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:06.063 [2024-07-16 00:57:22.020514] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid4026002 for offline analysis/debug. 00:04:06.063 [2024-07-16 00:57:22.020545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:06.348 00:57:22 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:06.348 00:57:22 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:06.348 00:57:22 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:06.348 00:57:22 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:06.348 00:57:22 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:06.348 00:57:22 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:06.348 00:57:22 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:06.348 00:57:22 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:06.348 00:57:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:06.348 ************************************ 00:04:06.348 START TEST rpc_integrity 00:04:06.348 ************************************ 00:04:06.348 00:57:22 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:06.348 00:57:22 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:06.348 00:57:22 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:06.348 00:57:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.348 00:57:22 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:06.348 00:57:22 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:04:06.348 00:57:22 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:06.611 00:57:22 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:06.611 00:57:22 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:06.611 00:57:22 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:06.611 00:57:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.611 00:57:22 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:06.611 00:57:22 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:06.611 00:57:22 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:06.611 00:57:22 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:06.611 00:57:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.611 00:57:22 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:06.611 00:57:22 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:06.611 { 00:04:06.611 "name": "Malloc0", 00:04:06.611 "aliases": [ 00:04:06.611 "01de4fa4-8024-4be2-8f7b-cf20d96ad92f" 00:04:06.611 ], 00:04:06.611 "product_name": "Malloc disk", 00:04:06.611 "block_size": 512, 00:04:06.611 "num_blocks": 16384, 00:04:06.611 "uuid": "01de4fa4-8024-4be2-8f7b-cf20d96ad92f", 00:04:06.611 "assigned_rate_limits": { 00:04:06.611 "rw_ios_per_sec": 0, 00:04:06.611 "rw_mbytes_per_sec": 0, 00:04:06.611 "r_mbytes_per_sec": 0, 00:04:06.611 "w_mbytes_per_sec": 0 00:04:06.611 }, 00:04:06.611 "claimed": false, 00:04:06.611 "zoned": false, 00:04:06.611 "supported_io_types": { 00:04:06.611 "read": true, 00:04:06.611 "write": true, 00:04:06.611 "unmap": true, 00:04:06.611 "flush": true, 00:04:06.611 "reset": true, 00:04:06.611 "nvme_admin": false, 00:04:06.611 "nvme_io": false, 00:04:06.611 "nvme_io_md": false, 00:04:06.611 "write_zeroes": true, 00:04:06.611 "zcopy": true, 00:04:06.611 "get_zone_info": false, 00:04:06.611 "zone_management": false, 00:04:06.611 "zone_append": false, 00:04:06.611 "compare": false, 00:04:06.611 "compare_and_write": false, 00:04:06.611 "abort": true, 00:04:06.611 "seek_hole": false, 00:04:06.611 "seek_data": false, 00:04:06.611 "copy": true, 00:04:06.611 "nvme_iov_md": false 00:04:06.611 }, 00:04:06.611 "memory_domains": [ 00:04:06.611 { 00:04:06.611 "dma_device_id": "system", 00:04:06.611 "dma_device_type": 1 00:04:06.611 }, 00:04:06.611 { 00:04:06.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:06.611 "dma_device_type": 2 00:04:06.611 } 00:04:06.611 ], 00:04:06.611 "driver_specific": {} 00:04:06.611 } 00:04:06.611 ]' 00:04:06.611 00:57:22 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:06.611 00:57:22 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:06.611 00:57:22 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:06.611 00:57:22 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:06.611 00:57:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.611 [2024-07-16 00:57:22.394170] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:06.611 [2024-07-16 00:57:22.394210] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:06.611 [2024-07-16 00:57:22.394253] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x5c4eb0 00:04:06.611 [2024-07-16 00:57:22.394267] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:06.611 
[2024-07-16 00:57:22.395521] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:06.611 [2024-07-16 00:57:22.395544] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:06.611 Passthru0 00:04:06.611 00:57:22 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:06.611 00:57:22 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:06.611 00:57:22 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:06.611 00:57:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.611 00:57:22 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:06.611 00:57:22 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:06.611 { 00:04:06.611 "name": "Malloc0", 00:04:06.611 "aliases": [ 00:04:06.611 "01de4fa4-8024-4be2-8f7b-cf20d96ad92f" 00:04:06.611 ], 00:04:06.611 "product_name": "Malloc disk", 00:04:06.611 "block_size": 512, 00:04:06.611 "num_blocks": 16384, 00:04:06.611 "uuid": "01de4fa4-8024-4be2-8f7b-cf20d96ad92f", 00:04:06.611 "assigned_rate_limits": { 00:04:06.611 "rw_ios_per_sec": 0, 00:04:06.611 "rw_mbytes_per_sec": 0, 00:04:06.611 "r_mbytes_per_sec": 0, 00:04:06.611 "w_mbytes_per_sec": 0 00:04:06.611 }, 00:04:06.611 "claimed": true, 00:04:06.611 "claim_type": "exclusive_write", 00:04:06.611 "zoned": false, 00:04:06.611 "supported_io_types": { 00:04:06.611 "read": true, 00:04:06.611 "write": true, 00:04:06.611 "unmap": true, 00:04:06.611 "flush": true, 00:04:06.611 "reset": true, 00:04:06.611 "nvme_admin": false, 00:04:06.611 "nvme_io": false, 00:04:06.611 "nvme_io_md": false, 00:04:06.611 "write_zeroes": true, 00:04:06.611 "zcopy": true, 00:04:06.611 "get_zone_info": false, 00:04:06.611 "zone_management": false, 00:04:06.611 "zone_append": false, 00:04:06.611 "compare": false, 00:04:06.611 "compare_and_write": false, 00:04:06.611 "abort": true, 00:04:06.611 "seek_hole": false, 00:04:06.611 "seek_data": false, 00:04:06.611 "copy": true, 00:04:06.611 "nvme_iov_md": false 00:04:06.611 }, 00:04:06.611 "memory_domains": [ 00:04:06.611 { 00:04:06.611 "dma_device_id": "system", 00:04:06.611 "dma_device_type": 1 00:04:06.611 }, 00:04:06.611 { 00:04:06.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:06.611 "dma_device_type": 2 00:04:06.611 } 00:04:06.611 ], 00:04:06.611 "driver_specific": {} 00:04:06.611 }, 00:04:06.611 { 00:04:06.611 "name": "Passthru0", 00:04:06.611 "aliases": [ 00:04:06.611 "78046fbb-665c-544f-a7d9-231e1f5cc4e0" 00:04:06.611 ], 00:04:06.611 "product_name": "passthru", 00:04:06.611 "block_size": 512, 00:04:06.611 "num_blocks": 16384, 00:04:06.611 "uuid": "78046fbb-665c-544f-a7d9-231e1f5cc4e0", 00:04:06.611 "assigned_rate_limits": { 00:04:06.611 "rw_ios_per_sec": 0, 00:04:06.611 "rw_mbytes_per_sec": 0, 00:04:06.611 "r_mbytes_per_sec": 0, 00:04:06.611 "w_mbytes_per_sec": 0 00:04:06.611 }, 00:04:06.611 "claimed": false, 00:04:06.611 "zoned": false, 00:04:06.611 "supported_io_types": { 00:04:06.611 "read": true, 00:04:06.611 "write": true, 00:04:06.611 "unmap": true, 00:04:06.611 "flush": true, 00:04:06.611 "reset": true, 00:04:06.611 "nvme_admin": false, 00:04:06.611 "nvme_io": false, 00:04:06.611 "nvme_io_md": false, 00:04:06.611 "write_zeroes": true, 00:04:06.611 "zcopy": true, 00:04:06.611 "get_zone_info": false, 00:04:06.611 "zone_management": false, 00:04:06.611 "zone_append": false, 00:04:06.611 "compare": false, 00:04:06.611 "compare_and_write": false, 00:04:06.611 "abort": true, 00:04:06.611 "seek_hole": false, 
00:04:06.611 "seek_data": false, 00:04:06.611 "copy": true, 00:04:06.611 "nvme_iov_md": false 00:04:06.611 }, 00:04:06.611 "memory_domains": [ 00:04:06.611 { 00:04:06.611 "dma_device_id": "system", 00:04:06.611 "dma_device_type": 1 00:04:06.611 }, 00:04:06.611 { 00:04:06.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:06.611 "dma_device_type": 2 00:04:06.611 } 00:04:06.611 ], 00:04:06.611 "driver_specific": { 00:04:06.611 "passthru": { 00:04:06.611 "name": "Passthru0", 00:04:06.611 "base_bdev_name": "Malloc0" 00:04:06.611 } 00:04:06.611 } 00:04:06.611 } 00:04:06.611 ]' 00:04:06.611 00:57:22 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:06.612 00:57:22 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:06.612 00:57:22 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:06.612 00:57:22 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:06.612 00:57:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.612 00:57:22 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:06.612 00:57:22 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:06.612 00:57:22 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:06.612 00:57:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.612 00:57:22 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:06.612 00:57:22 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:06.612 00:57:22 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:06.612 00:57:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.612 00:57:22 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:06.612 00:57:22 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:06.612 00:57:22 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:06.612 00:57:22 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:06.612 00:04:06.612 real 0m0.220s 00:04:06.612 user 0m0.144s 00:04:06.612 sys 0m0.024s 00:04:06.612 00:57:22 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:06.612 00:57:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.612 ************************************ 00:04:06.612 END TEST rpc_integrity 00:04:06.612 ************************************ 00:04:06.612 00:57:22 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:06.612 00:57:22 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:06.612 00:57:22 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:06.612 00:57:22 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:06.612 00:57:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:06.612 ************************************ 00:04:06.612 START TEST rpc_plugins 00:04:06.612 ************************************ 00:04:06.612 00:57:22 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:04:06.612 00:57:22 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:06.612 00:57:22 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:06.612 00:57:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:06.612 00:57:22 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:06.612 00:57:22 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:06.612 00:57:22 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
rpc_cmd bdev_get_bdevs 00:04:06.612 00:57:22 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:06.612 00:57:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:06.612 00:57:22 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:06.612 00:57:22 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:06.612 { 00:04:06.612 "name": "Malloc1", 00:04:06.612 "aliases": [ 00:04:06.612 "805991be-a456-4a23-b0ba-a173d8b8bcd0" 00:04:06.612 ], 00:04:06.612 "product_name": "Malloc disk", 00:04:06.612 "block_size": 4096, 00:04:06.612 "num_blocks": 256, 00:04:06.612 "uuid": "805991be-a456-4a23-b0ba-a173d8b8bcd0", 00:04:06.612 "assigned_rate_limits": { 00:04:06.612 "rw_ios_per_sec": 0, 00:04:06.612 "rw_mbytes_per_sec": 0, 00:04:06.612 "r_mbytes_per_sec": 0, 00:04:06.612 "w_mbytes_per_sec": 0 00:04:06.612 }, 00:04:06.612 "claimed": false, 00:04:06.612 "zoned": false, 00:04:06.612 "supported_io_types": { 00:04:06.612 "read": true, 00:04:06.612 "write": true, 00:04:06.612 "unmap": true, 00:04:06.612 "flush": true, 00:04:06.612 "reset": true, 00:04:06.612 "nvme_admin": false, 00:04:06.612 "nvme_io": false, 00:04:06.612 "nvme_io_md": false, 00:04:06.612 "write_zeroes": true, 00:04:06.612 "zcopy": true, 00:04:06.612 "get_zone_info": false, 00:04:06.612 "zone_management": false, 00:04:06.612 "zone_append": false, 00:04:06.612 "compare": false, 00:04:06.612 "compare_and_write": false, 00:04:06.612 "abort": true, 00:04:06.612 "seek_hole": false, 00:04:06.612 "seek_data": false, 00:04:06.612 "copy": true, 00:04:06.612 "nvme_iov_md": false 00:04:06.612 }, 00:04:06.612 "memory_domains": [ 00:04:06.612 { 00:04:06.612 "dma_device_id": "system", 00:04:06.612 "dma_device_type": 1 00:04:06.612 }, 00:04:06.612 { 00:04:06.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:06.612 "dma_device_type": 2 00:04:06.612 } 00:04:06.612 ], 00:04:06.612 "driver_specific": {} 00:04:06.612 } 00:04:06.612 ]' 00:04:06.612 00:57:22 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:06.870 00:57:22 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:06.870 00:57:22 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:06.870 00:57:22 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:06.870 00:57:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:06.870 00:57:22 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:06.870 00:57:22 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:06.870 00:57:22 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:06.870 00:57:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:06.870 00:57:22 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:06.870 00:57:22 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:06.870 00:57:22 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:06.870 00:57:22 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:06.870 00:04:06.870 real 0m0.104s 00:04:06.870 user 0m0.071s 00:04:06.870 sys 0m0.007s 00:04:06.870 00:57:22 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:06.870 00:57:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:06.870 ************************************ 00:04:06.870 END TEST rpc_plugins 00:04:06.870 ************************************ 00:04:06.870 00:57:22 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:06.870 00:57:22 rpc -- rpc/rpc.sh@75 -- # 
run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:06.870 00:57:22 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:06.870 00:57:22 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:06.870 00:57:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:06.870 ************************************ 00:04:06.870 START TEST rpc_trace_cmd_test 00:04:06.870 ************************************ 00:04:06.870 00:57:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:06.870 00:57:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:06.870 00:57:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:06.870 00:57:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:06.870 00:57:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:06.870 00:57:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:06.870 00:57:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:06.870 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid4026002", 00:04:06.870 "tpoint_group_mask": "0x8", 00:04:06.870 "iscsi_conn": { 00:04:06.870 "mask": "0x2", 00:04:06.870 "tpoint_mask": "0x0" 00:04:06.870 }, 00:04:06.870 "scsi": { 00:04:06.870 "mask": "0x4", 00:04:06.870 "tpoint_mask": "0x0" 00:04:06.870 }, 00:04:06.870 "bdev": { 00:04:06.870 "mask": "0x8", 00:04:06.870 "tpoint_mask": "0xffffffffffffffff" 00:04:06.870 }, 00:04:06.870 "nvmf_rdma": { 00:04:06.870 "mask": "0x10", 00:04:06.870 "tpoint_mask": "0x0" 00:04:06.870 }, 00:04:06.870 "nvmf_tcp": { 00:04:06.870 "mask": "0x20", 00:04:06.870 "tpoint_mask": "0x0" 00:04:06.870 }, 00:04:06.870 "ftl": { 00:04:06.870 "mask": "0x40", 00:04:06.870 "tpoint_mask": "0x0" 00:04:06.870 }, 00:04:06.870 "blobfs": { 00:04:06.870 "mask": "0x80", 00:04:06.870 "tpoint_mask": "0x0" 00:04:06.870 }, 00:04:06.870 "dsa": { 00:04:06.870 "mask": "0x200", 00:04:06.870 "tpoint_mask": "0x0" 00:04:06.870 }, 00:04:06.870 "thread": { 00:04:06.870 "mask": "0x400", 00:04:06.870 "tpoint_mask": "0x0" 00:04:06.870 }, 00:04:06.870 "nvme_pcie": { 00:04:06.870 "mask": "0x800", 00:04:06.870 "tpoint_mask": "0x0" 00:04:06.870 }, 00:04:06.870 "iaa": { 00:04:06.870 "mask": "0x1000", 00:04:06.870 "tpoint_mask": "0x0" 00:04:06.870 }, 00:04:06.870 "nvme_tcp": { 00:04:06.870 "mask": "0x2000", 00:04:06.870 "tpoint_mask": "0x0" 00:04:06.870 }, 00:04:06.870 "bdev_nvme": { 00:04:06.870 "mask": "0x4000", 00:04:06.870 "tpoint_mask": "0x0" 00:04:06.870 }, 00:04:06.870 "sock": { 00:04:06.870 "mask": "0x8000", 00:04:06.870 "tpoint_mask": "0x0" 00:04:06.870 } 00:04:06.870 }' 00:04:06.870 00:57:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:06.870 00:57:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:06.870 00:57:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:06.870 00:57:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:06.870 00:57:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:06.870 00:57:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:06.870 00:57:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:06.870 00:57:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:06.870 00:57:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:07.129 00:57:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 
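The jq assertions above can be reproduced by hand against a live target. A minimal sketch, assuming scripts/rpc.py from the SPDK tree and the default /var/tmp/spdk.sock socket:

    # Mirror the rpc_trace_cmd_test checks manually (script path is an assumption).
    info=$(./scripts/rpc.py trace_get_info)
    echo "$info" | jq 'has("tpoint_group_mask")'   # expected: true
    echo "$info" | jq 'has("tpoint_shm_path")'     # expected: true
    echo "$info" | jq -r '.bdev.tpoint_mask'       # non-zero here because spdk_tgt ran with -e bdev
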
00:04:07.129 00:04:07.129 real 0m0.182s 00:04:07.129 user 0m0.160s 00:04:07.129 sys 0m0.013s 00:04:07.129 00:57:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:07.129 00:57:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:07.129 ************************************ 00:04:07.129 END TEST rpc_trace_cmd_test 00:04:07.129 ************************************ 00:04:07.129 00:57:22 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:07.129 00:57:22 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:07.129 00:57:22 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:07.129 00:57:22 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:07.129 00:57:22 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:07.129 00:57:22 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.129 00:57:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.129 ************************************ 00:04:07.129 START TEST rpc_daemon_integrity 00:04:07.129 ************************************ 00:04:07.129 00:57:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:07.129 00:57:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:07.129 00:57:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:07.129 00:57:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.129 00:57:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:07.129 00:57:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:07.129 00:57:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:07.129 00:57:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:07.129 00:57:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:07.129 00:57:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:07.129 00:57:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.129 00:57:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:07.129 00:57:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:07.129 00:57:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:07.129 00:57:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:07.129 00:57:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.129 00:57:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:07.129 00:57:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:07.129 { 00:04:07.129 "name": "Malloc2", 00:04:07.129 "aliases": [ 00:04:07.129 "9cb3decd-80ea-4a20-80c3-e6b6d0d264d2" 00:04:07.129 ], 00:04:07.129 "product_name": "Malloc disk", 00:04:07.129 "block_size": 512, 00:04:07.129 "num_blocks": 16384, 00:04:07.129 "uuid": "9cb3decd-80ea-4a20-80c3-e6b6d0d264d2", 00:04:07.129 "assigned_rate_limits": { 00:04:07.129 "rw_ios_per_sec": 0, 00:04:07.129 "rw_mbytes_per_sec": 0, 00:04:07.129 "r_mbytes_per_sec": 0, 00:04:07.129 "w_mbytes_per_sec": 0 00:04:07.129 }, 00:04:07.129 "claimed": false, 00:04:07.129 "zoned": false, 00:04:07.129 "supported_io_types": { 00:04:07.129 "read": true, 00:04:07.129 "write": true, 00:04:07.129 "unmap": true, 00:04:07.129 "flush": true, 00:04:07.129 "reset": true, 00:04:07.129 "nvme_admin": false, 00:04:07.129 "nvme_io": false, 
00:04:07.129 "nvme_io_md": false, 00:04:07.129 "write_zeroes": true, 00:04:07.129 "zcopy": true, 00:04:07.129 "get_zone_info": false, 00:04:07.129 "zone_management": false, 00:04:07.129 "zone_append": false, 00:04:07.129 "compare": false, 00:04:07.129 "compare_and_write": false, 00:04:07.129 "abort": true, 00:04:07.129 "seek_hole": false, 00:04:07.129 "seek_data": false, 00:04:07.129 "copy": true, 00:04:07.129 "nvme_iov_md": false 00:04:07.129 }, 00:04:07.129 "memory_domains": [ 00:04:07.129 { 00:04:07.129 "dma_device_id": "system", 00:04:07.129 "dma_device_type": 1 00:04:07.129 }, 00:04:07.129 { 00:04:07.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:07.129 "dma_device_type": 2 00:04:07.129 } 00:04:07.129 ], 00:04:07.129 "driver_specific": {} 00:04:07.129 } 00:04:07.129 ]' 00:04:07.129 00:57:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:07.129 00:57:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:07.129 00:57:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:07.130 00:57:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:07.130 00:57:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.130 [2024-07-16 00:57:23.040136] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:07.130 [2024-07-16 00:57:23.040174] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:07.130 [2024-07-16 00:57:23.040197] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x5be080 00:04:07.130 [2024-07-16 00:57:23.040211] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:07.130 [2024-07-16 00:57:23.041374] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:07.130 [2024-07-16 00:57:23.041397] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:07.130 Passthru0 00:04:07.130 00:57:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:07.130 00:57:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:07.130 00:57:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:07.130 00:57:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.130 00:57:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:07.130 00:57:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:07.130 { 00:04:07.130 "name": "Malloc2", 00:04:07.130 "aliases": [ 00:04:07.130 "9cb3decd-80ea-4a20-80c3-e6b6d0d264d2" 00:04:07.130 ], 00:04:07.130 "product_name": "Malloc disk", 00:04:07.130 "block_size": 512, 00:04:07.130 "num_blocks": 16384, 00:04:07.130 "uuid": "9cb3decd-80ea-4a20-80c3-e6b6d0d264d2", 00:04:07.130 "assigned_rate_limits": { 00:04:07.130 "rw_ios_per_sec": 0, 00:04:07.130 "rw_mbytes_per_sec": 0, 00:04:07.130 "r_mbytes_per_sec": 0, 00:04:07.130 "w_mbytes_per_sec": 0 00:04:07.130 }, 00:04:07.130 "claimed": true, 00:04:07.130 "claim_type": "exclusive_write", 00:04:07.130 "zoned": false, 00:04:07.130 "supported_io_types": { 00:04:07.130 "read": true, 00:04:07.130 "write": true, 00:04:07.130 "unmap": true, 00:04:07.130 "flush": true, 00:04:07.130 "reset": true, 00:04:07.130 "nvme_admin": false, 00:04:07.130 "nvme_io": false, 00:04:07.130 "nvme_io_md": false, 00:04:07.130 "write_zeroes": true, 00:04:07.130 "zcopy": true, 00:04:07.130 "get_zone_info": 
false, 00:04:07.130 "zone_management": false, 00:04:07.130 "zone_append": false, 00:04:07.130 "compare": false, 00:04:07.130 "compare_and_write": false, 00:04:07.130 "abort": true, 00:04:07.130 "seek_hole": false, 00:04:07.130 "seek_data": false, 00:04:07.130 "copy": true, 00:04:07.130 "nvme_iov_md": false 00:04:07.130 }, 00:04:07.130 "memory_domains": [ 00:04:07.130 { 00:04:07.130 "dma_device_id": "system", 00:04:07.130 "dma_device_type": 1 00:04:07.130 }, 00:04:07.130 { 00:04:07.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:07.130 "dma_device_type": 2 00:04:07.130 } 00:04:07.130 ], 00:04:07.130 "driver_specific": {} 00:04:07.130 }, 00:04:07.130 { 00:04:07.130 "name": "Passthru0", 00:04:07.130 "aliases": [ 00:04:07.130 "1335f299-c0aa-5dee-bcdf-62a8981dd85d" 00:04:07.130 ], 00:04:07.130 "product_name": "passthru", 00:04:07.130 "block_size": 512, 00:04:07.130 "num_blocks": 16384, 00:04:07.130 "uuid": "1335f299-c0aa-5dee-bcdf-62a8981dd85d", 00:04:07.130 "assigned_rate_limits": { 00:04:07.130 "rw_ios_per_sec": 0, 00:04:07.130 "rw_mbytes_per_sec": 0, 00:04:07.130 "r_mbytes_per_sec": 0, 00:04:07.130 "w_mbytes_per_sec": 0 00:04:07.130 }, 00:04:07.130 "claimed": false, 00:04:07.130 "zoned": false, 00:04:07.130 "supported_io_types": { 00:04:07.130 "read": true, 00:04:07.130 "write": true, 00:04:07.130 "unmap": true, 00:04:07.130 "flush": true, 00:04:07.130 "reset": true, 00:04:07.130 "nvme_admin": false, 00:04:07.130 "nvme_io": false, 00:04:07.130 "nvme_io_md": false, 00:04:07.130 "write_zeroes": true, 00:04:07.130 "zcopy": true, 00:04:07.130 "get_zone_info": false, 00:04:07.130 "zone_management": false, 00:04:07.130 "zone_append": false, 00:04:07.130 "compare": false, 00:04:07.130 "compare_and_write": false, 00:04:07.130 "abort": true, 00:04:07.130 "seek_hole": false, 00:04:07.130 "seek_data": false, 00:04:07.130 "copy": true, 00:04:07.130 "nvme_iov_md": false 00:04:07.130 }, 00:04:07.130 "memory_domains": [ 00:04:07.130 { 00:04:07.130 "dma_device_id": "system", 00:04:07.130 "dma_device_type": 1 00:04:07.130 }, 00:04:07.130 { 00:04:07.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:07.130 "dma_device_type": 2 00:04:07.130 } 00:04:07.130 ], 00:04:07.130 "driver_specific": { 00:04:07.130 "passthru": { 00:04:07.130 "name": "Passthru0", 00:04:07.130 "base_bdev_name": "Malloc2" 00:04:07.130 } 00:04:07.130 } 00:04:07.130 } 00:04:07.130 ]' 00:04:07.130 00:57:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:07.130 00:57:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:07.130 00:57:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:07.130 00:57:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:07.130 00:57:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.130 00:57:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:07.130 00:57:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:07.130 00:57:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:07.130 00:57:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.130 00:57:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:07.130 00:57:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:07.130 00:57:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:07.130 00:57:23 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.130 00:57:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:07.130 00:57:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:07.130 00:57:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:07.389 00:57:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:07.389 00:04:07.389 real 0m0.214s 00:04:07.389 user 0m0.146s 00:04:07.389 sys 0m0.013s 00:04:07.389 00:57:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:07.389 00:57:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.389 ************************************ 00:04:07.389 END TEST rpc_daemon_integrity 00:04:07.389 ************************************ 00:04:07.389 00:57:23 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:07.389 00:57:23 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:07.389 00:57:23 rpc -- rpc/rpc.sh@84 -- # killprocess 4026002 00:04:07.389 00:57:23 rpc -- common/autotest_common.sh@948 -- # '[' -z 4026002 ']' 00:04:07.389 00:57:23 rpc -- common/autotest_common.sh@952 -- # kill -0 4026002 00:04:07.389 00:57:23 rpc -- common/autotest_common.sh@953 -- # uname 00:04:07.389 00:57:23 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:07.389 00:57:23 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4026002 00:04:07.389 00:57:23 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:07.389 00:57:23 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:07.389 00:57:23 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4026002' 00:04:07.389 killing process with pid 4026002 00:04:07.389 00:57:23 rpc -- common/autotest_common.sh@967 -- # kill 4026002 00:04:07.389 00:57:23 rpc -- common/autotest_common.sh@972 -- # wait 4026002 00:04:07.647 00:04:07.647 real 0m1.864s 00:04:07.647 user 0m2.365s 00:04:07.647 sys 0m0.543s 00:04:07.647 00:57:23 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:07.647 00:57:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.647 ************************************ 00:04:07.647 END TEST rpc 00:04:07.647 ************************************ 00:04:07.647 00:57:23 -- common/autotest_common.sh@1142 -- # return 0 00:04:07.647 00:57:23 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:07.647 00:57:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:07.647 00:57:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.647 00:57:23 -- common/autotest_common.sh@10 -- # set +x 00:04:07.905 ************************************ 00:04:07.905 START TEST skip_rpc 00:04:07.905 ************************************ 00:04:07.905 00:57:23 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:07.905 * Looking for test storage... 
00:04:07.905 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:07.905 00:57:23 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:07.905 00:57:23 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:07.905 00:57:23 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:07.905 00:57:23 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:07.905 00:57:23 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.905 00:57:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.905 ************************************ 00:04:07.905 START TEST skip_rpc 00:04:07.905 ************************************ 00:04:07.905 00:57:23 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:04:07.905 00:57:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=4026438 00:04:07.905 00:57:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:07.905 00:57:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:07.905 00:57:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:07.905 [2024-07-16 00:57:23.795069] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:04:07.905 [2024-07-16 00:57:23.795146] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4026438 ] 00:04:07.905 EAL: No free 2048 kB hugepages reported on node 1 00:04:07.905 [2024-07-16 00:57:23.850412] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:08.163 [2024-07-16 00:57:23.952389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:13.449 00:57:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:13.449 00:57:28 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:13.449 00:57:28 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:13.449 00:57:28 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:13.449 00:57:28 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:13.449 00:57:28 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:13.449 00:57:28 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:13.449 00:57:28 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:13.449 00:57:28 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:13.449 00:57:28 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.449 00:57:28 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:13.449 00:57:28 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:13.449 00:57:28 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:13.449 00:57:28 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:13.449 00:57:28 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:13.449 00:57:28 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:13.449 00:57:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 4026438 00:04:13.449 00:57:28 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 4026438 ']' 00:04:13.449 00:57:28 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 4026438 00:04:13.449 00:57:28 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:13.449 00:57:28 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:13.449 00:57:28 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4026438 00:04:13.449 00:57:28 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:13.449 00:57:28 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:13.449 00:57:28 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4026438' 00:04:13.449 killing process with pid 4026438 00:04:13.449 00:57:28 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 4026438 00:04:13.449 00:57:28 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 4026438 00:04:13.449 00:04:13.449 real 0m5.452s 00:04:13.449 user 0m5.162s 00:04:13.449 sys 0m0.292s 00:04:13.449 00:57:29 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:13.449 00:57:29 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.449 ************************************ 00:04:13.449 END TEST skip_rpc 00:04:13.449 ************************************ 00:04:13.449 00:57:29 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:13.449 00:57:29 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:13.449 00:57:29 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:13.449 00:57:29 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.449 00:57:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.449 ************************************ 00:04:13.449 START TEST skip_rpc_with_json 00:04:13.449 ************************************ 00:04:13.449 00:57:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:13.449 00:57:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:13.449 00:57:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=4027128 00:04:13.449 00:57:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:13.449 00:57:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:13.449 00:57:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 4027128 00:04:13.449 00:57:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 4027128 ']' 00:04:13.449 00:57:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:13.449 00:57:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:13.449 00:57:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:13.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
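waitforlisten blocks at this point until the freshly launched target answers on its RPC socket. A rough stand-in for that helper, assuming rpc.py and the default socket path (the retry budget is arbitrary):

    # Hypothetical poll loop approximating waitforlisten.
    for _ in $(seq 1 100); do
        ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1 && break
        sleep 0.1
    done
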
00:04:13.449 00:57:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:13.449 00:57:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:13.449 [2024-07-16 00:57:29.298487] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:04:13.449 [2024-07-16 00:57:29.298584] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4027128 ] 00:04:13.449 EAL: No free 2048 kB hugepages reported on node 1 00:04:13.449 [2024-07-16 00:57:29.356056] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:13.707 [2024-07-16 00:57:29.467174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:13.966 00:57:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:13.966 00:57:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:04:13.966 00:57:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:13.966 00:57:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:13.966 00:57:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:13.966 [2024-07-16 00:57:29.712654] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:13.966 request: 00:04:13.966 { 00:04:13.966 "trtype": "tcp", 00:04:13.966 "method": "nvmf_get_transports", 00:04:13.966 "req_id": 1 00:04:13.966 } 00:04:13.966 Got JSON-RPC error response 00:04:13.966 response: 00:04:13.966 { 00:04:13.966 "code": -19, 00:04:13.966 "message": "No such device" 00:04:13.966 } 00:04:13.966 00:57:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:13.966 00:57:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:13.966 00:57:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:13.966 00:57:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:13.966 [2024-07-16 00:57:29.720756] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:13.966 00:57:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:13.966 00:57:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:13.966 00:57:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:13.966 00:57:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:13.966 00:57:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:13.966 00:57:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:13.966 { 00:04:13.966 "subsystems": [ 00:04:13.966 { 00:04:13.966 "subsystem": "vfio_user_target", 00:04:13.966 "config": null 00:04:13.966 }, 00:04:13.966 { 00:04:13.966 "subsystem": "keyring", 00:04:13.966 "config": [] 00:04:13.966 }, 00:04:13.966 { 00:04:13.966 "subsystem": "iobuf", 00:04:13.966 "config": [ 00:04:13.966 { 00:04:13.966 "method": "iobuf_set_options", 00:04:13.966 "params": { 00:04:13.966 "small_pool_count": 8192, 00:04:13.966 "large_pool_count": 1024, 00:04:13.966 "small_bufsize": 8192, 00:04:13.966 "large_bufsize": 
135168 00:04:13.966 } 00:04:13.966 } 00:04:13.966 ] 00:04:13.966 }, 00:04:13.966 { 00:04:13.966 "subsystem": "sock", 00:04:13.966 "config": [ 00:04:13.966 { 00:04:13.966 "method": "sock_set_default_impl", 00:04:13.966 "params": { 00:04:13.966 "impl_name": "posix" 00:04:13.966 } 00:04:13.966 }, 00:04:13.966 { 00:04:13.966 "method": "sock_impl_set_options", 00:04:13.966 "params": { 00:04:13.966 "impl_name": "ssl", 00:04:13.966 "recv_buf_size": 4096, 00:04:13.966 "send_buf_size": 4096, 00:04:13.966 "enable_recv_pipe": true, 00:04:13.966 "enable_quickack": false, 00:04:13.966 "enable_placement_id": 0, 00:04:13.966 "enable_zerocopy_send_server": true, 00:04:13.966 "enable_zerocopy_send_client": false, 00:04:13.966 "zerocopy_threshold": 0, 00:04:13.966 "tls_version": 0, 00:04:13.966 "enable_ktls": false 00:04:13.966 } 00:04:13.966 }, 00:04:13.966 { 00:04:13.966 "method": "sock_impl_set_options", 00:04:13.966 "params": { 00:04:13.966 "impl_name": "posix", 00:04:13.966 "recv_buf_size": 2097152, 00:04:13.966 "send_buf_size": 2097152, 00:04:13.966 "enable_recv_pipe": true, 00:04:13.966 "enable_quickack": false, 00:04:13.966 "enable_placement_id": 0, 00:04:13.966 "enable_zerocopy_send_server": true, 00:04:13.966 "enable_zerocopy_send_client": false, 00:04:13.966 "zerocopy_threshold": 0, 00:04:13.966 "tls_version": 0, 00:04:13.966 "enable_ktls": false 00:04:13.966 } 00:04:13.966 } 00:04:13.966 ] 00:04:13.966 }, 00:04:13.966 { 00:04:13.966 "subsystem": "vmd", 00:04:13.966 "config": [] 00:04:13.966 }, 00:04:13.966 { 00:04:13.966 "subsystem": "accel", 00:04:13.966 "config": [ 00:04:13.966 { 00:04:13.966 "method": "accel_set_options", 00:04:13.966 "params": { 00:04:13.966 "small_cache_size": 128, 00:04:13.966 "large_cache_size": 16, 00:04:13.966 "task_count": 2048, 00:04:13.967 "sequence_count": 2048, 00:04:13.967 "buf_count": 2048 00:04:13.967 } 00:04:13.967 } 00:04:13.967 ] 00:04:13.967 }, 00:04:13.967 { 00:04:13.967 "subsystem": "bdev", 00:04:13.967 "config": [ 00:04:13.967 { 00:04:13.967 "method": "bdev_set_options", 00:04:13.967 "params": { 00:04:13.967 "bdev_io_pool_size": 65535, 00:04:13.967 "bdev_io_cache_size": 256, 00:04:13.967 "bdev_auto_examine": true, 00:04:13.967 "iobuf_small_cache_size": 128, 00:04:13.967 "iobuf_large_cache_size": 16 00:04:13.967 } 00:04:13.967 }, 00:04:13.967 { 00:04:13.967 "method": "bdev_raid_set_options", 00:04:13.967 "params": { 00:04:13.967 "process_window_size_kb": 1024 00:04:13.967 } 00:04:13.967 }, 00:04:13.967 { 00:04:13.967 "method": "bdev_iscsi_set_options", 00:04:13.967 "params": { 00:04:13.967 "timeout_sec": 30 00:04:13.967 } 00:04:13.967 }, 00:04:13.967 { 00:04:13.967 "method": "bdev_nvme_set_options", 00:04:13.967 "params": { 00:04:13.967 "action_on_timeout": "none", 00:04:13.967 "timeout_us": 0, 00:04:13.967 "timeout_admin_us": 0, 00:04:13.967 "keep_alive_timeout_ms": 10000, 00:04:13.967 "arbitration_burst": 0, 00:04:13.967 "low_priority_weight": 0, 00:04:13.967 "medium_priority_weight": 0, 00:04:13.967 "high_priority_weight": 0, 00:04:13.967 "nvme_adminq_poll_period_us": 10000, 00:04:13.967 "nvme_ioq_poll_period_us": 0, 00:04:13.967 "io_queue_requests": 0, 00:04:13.967 "delay_cmd_submit": true, 00:04:13.967 "transport_retry_count": 4, 00:04:13.967 "bdev_retry_count": 3, 00:04:13.967 "transport_ack_timeout": 0, 00:04:13.967 "ctrlr_loss_timeout_sec": 0, 00:04:13.967 "reconnect_delay_sec": 0, 00:04:13.967 "fast_io_fail_timeout_sec": 0, 00:04:13.967 "disable_auto_failback": false, 00:04:13.967 "generate_uuids": false, 00:04:13.967 "transport_tos": 0, 
00:04:13.967 "nvme_error_stat": false, 00:04:13.967 "rdma_srq_size": 0, 00:04:13.967 "io_path_stat": false, 00:04:13.967 "allow_accel_sequence": false, 00:04:13.967 "rdma_max_cq_size": 0, 00:04:13.967 "rdma_cm_event_timeout_ms": 0, 00:04:13.967 "dhchap_digests": [ 00:04:13.967 "sha256", 00:04:13.967 "sha384", 00:04:13.967 "sha512" 00:04:13.967 ], 00:04:13.967 "dhchap_dhgroups": [ 00:04:13.967 "null", 00:04:13.967 "ffdhe2048", 00:04:13.967 "ffdhe3072", 00:04:13.967 "ffdhe4096", 00:04:13.967 "ffdhe6144", 00:04:13.967 "ffdhe8192" 00:04:13.967 ] 00:04:13.967 } 00:04:13.967 }, 00:04:13.967 { 00:04:13.967 "method": "bdev_nvme_set_hotplug", 00:04:13.967 "params": { 00:04:13.967 "period_us": 100000, 00:04:13.967 "enable": false 00:04:13.967 } 00:04:13.967 }, 00:04:13.967 { 00:04:13.967 "method": "bdev_wait_for_examine" 00:04:13.967 } 00:04:13.967 ] 00:04:13.967 }, 00:04:13.967 { 00:04:13.967 "subsystem": "scsi", 00:04:13.967 "config": null 00:04:13.967 }, 00:04:13.967 { 00:04:13.967 "subsystem": "scheduler", 00:04:13.967 "config": [ 00:04:13.967 { 00:04:13.967 "method": "framework_set_scheduler", 00:04:13.967 "params": { 00:04:13.967 "name": "static" 00:04:13.967 } 00:04:13.967 } 00:04:13.967 ] 00:04:13.967 }, 00:04:13.967 { 00:04:13.967 "subsystem": "vhost_scsi", 00:04:13.967 "config": [] 00:04:13.967 }, 00:04:13.967 { 00:04:13.967 "subsystem": "vhost_blk", 00:04:13.967 "config": [] 00:04:13.967 }, 00:04:13.967 { 00:04:13.967 "subsystem": "ublk", 00:04:13.967 "config": [] 00:04:13.967 }, 00:04:13.967 { 00:04:13.967 "subsystem": "nbd", 00:04:13.967 "config": [] 00:04:13.967 }, 00:04:13.967 { 00:04:13.967 "subsystem": "nvmf", 00:04:13.967 "config": [ 00:04:13.967 { 00:04:13.967 "method": "nvmf_set_config", 00:04:13.967 "params": { 00:04:13.967 "discovery_filter": "match_any", 00:04:13.967 "admin_cmd_passthru": { 00:04:13.967 "identify_ctrlr": false 00:04:13.967 } 00:04:13.967 } 00:04:13.967 }, 00:04:13.967 { 00:04:13.967 "method": "nvmf_set_max_subsystems", 00:04:13.967 "params": { 00:04:13.967 "max_subsystems": 1024 00:04:13.967 } 00:04:13.967 }, 00:04:13.967 { 00:04:13.967 "method": "nvmf_set_crdt", 00:04:13.967 "params": { 00:04:13.967 "crdt1": 0, 00:04:13.967 "crdt2": 0, 00:04:13.967 "crdt3": 0 00:04:13.967 } 00:04:13.967 }, 00:04:13.967 { 00:04:13.967 "method": "nvmf_create_transport", 00:04:13.967 "params": { 00:04:13.967 "trtype": "TCP", 00:04:13.967 "max_queue_depth": 128, 00:04:13.967 "max_io_qpairs_per_ctrlr": 127, 00:04:13.967 "in_capsule_data_size": 4096, 00:04:13.967 "max_io_size": 131072, 00:04:13.967 "io_unit_size": 131072, 00:04:13.967 "max_aq_depth": 128, 00:04:13.967 "num_shared_buffers": 511, 00:04:13.967 "buf_cache_size": 4294967295, 00:04:13.967 "dif_insert_or_strip": false, 00:04:13.967 "zcopy": false, 00:04:13.967 "c2h_success": true, 00:04:13.967 "sock_priority": 0, 00:04:13.967 "abort_timeout_sec": 1, 00:04:13.967 "ack_timeout": 0, 00:04:13.967 "data_wr_pool_size": 0 00:04:13.967 } 00:04:13.967 } 00:04:13.967 ] 00:04:13.967 }, 00:04:13.967 { 00:04:13.967 "subsystem": "iscsi", 00:04:13.967 "config": [ 00:04:13.967 { 00:04:13.967 "method": "iscsi_set_options", 00:04:13.967 "params": { 00:04:13.967 "node_base": "iqn.2016-06.io.spdk", 00:04:13.967 "max_sessions": 128, 00:04:13.967 "max_connections_per_session": 2, 00:04:13.967 "max_queue_depth": 64, 00:04:13.967 "default_time2wait": 2, 00:04:13.967 "default_time2retain": 20, 00:04:13.967 "first_burst_length": 8192, 00:04:13.967 "immediate_data": true, 00:04:13.967 "allow_duplicated_isid": false, 00:04:13.967 
"error_recovery_level": 0, 00:04:13.967 "nop_timeout": 60, 00:04:13.967 "nop_in_interval": 30, 00:04:13.967 "disable_chap": false, 00:04:13.967 "require_chap": false, 00:04:13.967 "mutual_chap": false, 00:04:13.967 "chap_group": 0, 00:04:13.967 "max_large_datain_per_connection": 64, 00:04:13.967 "max_r2t_per_connection": 4, 00:04:13.967 "pdu_pool_size": 36864, 00:04:13.967 "immediate_data_pool_size": 16384, 00:04:13.967 "data_out_pool_size": 2048 00:04:13.967 } 00:04:13.967 } 00:04:13.967 ] 00:04:13.967 } 00:04:13.967 ] 00:04:13.967 } 00:04:13.967 00:57:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:13.967 00:57:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 4027128 00:04:13.967 00:57:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 4027128 ']' 00:04:13.967 00:57:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 4027128 00:04:13.967 00:57:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:13.967 00:57:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:13.967 00:57:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4027128 00:04:13.967 00:57:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:13.967 00:57:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:13.967 00:57:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4027128' 00:04:13.967 killing process with pid 4027128 00:04:13.967 00:57:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 4027128 00:04:13.967 00:57:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 4027128 00:04:14.533 00:57:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=4027266 00:04:14.533 00:57:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:14.533 00:57:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:19.792 00:57:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 4027266 00:04:19.792 00:57:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 4027266 ']' 00:04:19.792 00:57:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 4027266 00:04:19.792 00:57:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:19.792 00:57:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:19.792 00:57:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4027266 00:04:19.792 00:57:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:19.792 00:57:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:19.792 00:57:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4027266' 00:04:19.792 killing process with pid 4027266 00:04:19.792 00:57:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 4027266 00:04:19.792 00:57:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 4027266 
00:04:19.792 00:57:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:19.792 00:57:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:19.792 00:04:19.792 real 0m6.540s 00:04:19.792 user 0m6.171s 00:04:19.792 sys 0m0.652s 00:04:19.792 00:57:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:19.792 00:57:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:19.792 ************************************ 00:04:19.792 END TEST skip_rpc_with_json 00:04:19.792 ************************************ 00:04:20.050 00:57:35 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:20.050 00:57:35 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:20.050 00:57:35 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:20.050 00:57:35 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.050 00:57:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.050 ************************************ 00:04:20.050 START TEST skip_rpc_with_delay 00:04:20.050 ************************************ 00:04:20.050 00:57:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:04:20.050 00:57:35 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:20.050 00:57:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:20.050 00:57:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:20.050 00:57:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:20.050 00:57:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:20.050 00:57:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:20.050 00:57:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:20.050 00:57:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:20.050 00:57:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:20.050 00:57:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:20.050 00:57:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:20.050 00:57:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:20.050 [2024-07-16 00:57:35.891050] app.c: 837:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
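The rejection above is by design: --wait-for-rpc parks subsystem initialization until an RPC arrives, which can never happen under --no-rpc-server. The valid pairing, sketched on the assumption that framework_start_init is the RPC that resumes startup (as in stock SPDK):

    # Deferred-init pattern that --wait-for-rpc exists for.
    ./build/bin/spdk_tgt -m 0x1 --wait-for-rpc &
    sleep 1                                     # or poll the socket, as waitforlisten does
    ./scripts/rpc.py framework_start_init       # completes subsystem initialization
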
00:04:20.050 [2024-07-16 00:57:35.891159] app.c: 716:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:20.050 00:57:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:20.050 00:57:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:20.050 00:57:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:20.050 00:57:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:20.050 00:04:20.050 real 0m0.070s 00:04:20.050 user 0m0.046s 00:04:20.050 sys 0m0.024s 00:04:20.050 00:57:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:20.050 00:57:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:20.050 ************************************ 00:04:20.050 END TEST skip_rpc_with_delay 00:04:20.050 ************************************ 00:04:20.050 00:57:35 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:20.050 00:57:35 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:20.050 00:57:35 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:20.050 00:57:35 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:20.050 00:57:35 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:20.050 00:57:35 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.050 00:57:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.050 ************************************ 00:04:20.050 START TEST exit_on_failed_rpc_init 00:04:20.050 ************************************ 00:04:20.050 00:57:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:04:20.050 00:57:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=4027984 00:04:20.050 00:57:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:20.050 00:57:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 4027984 00:04:20.050 00:57:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 4027984 ']' 00:04:20.050 00:57:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:20.050 00:57:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:20.050 00:57:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:20.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:20.050 00:57:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:20.050 00:57:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:20.050 [2024-07-16 00:57:36.010238] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 
00:04:20.050 [2024-07-16 00:57:36.010344] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4027984 ] 00:04:20.050 EAL: No free 2048 kB hugepages reported on node 1 00:04:20.308 [2024-07-16 00:57:36.068547] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.308 [2024-07-16 00:57:36.178093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.566 00:57:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:20.566 00:57:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:04:20.566 00:57:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:20.566 00:57:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:20.566 00:57:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:20.566 00:57:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:20.566 00:57:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:20.566 00:57:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:20.566 00:57:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:20.566 00:57:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:20.566 00:57:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:20.566 00:57:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:20.566 00:57:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:20.566 00:57:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:20.566 00:57:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:20.566 [2024-07-16 00:57:36.470643] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 
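The second spdk_tgt instance started below (-m 0x2) is meant to fail: the first instance (-m 0x1) already owns the default RPC socket, so initialization aborts and the test verifies the non-zero exit. A rough reproduction of the collision outside the harness (socket path is the spdk_tgt default):

  $ ./build/bin/spdk_tgt -m 0x1 &      # first target claims /var/tmp/spdk.sock
  $ ./build/bin/spdk_tgt -m 0x2        # same default socket, expected to abort with:
  #   *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
  #   *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock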
00:04:20.566 [2024-07-16 00:57:36.470729] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4027995 ] 00:04:20.566 EAL: No free 2048 kB hugepages reported on node 1 00:04:20.566 [2024-07-16 00:57:36.527261] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.824 [2024-07-16 00:57:36.637273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:20.824 [2024-07-16 00:57:36.637391] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:20.824 [2024-07-16 00:57:36.637410] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:20.824 [2024-07-16 00:57:36.637420] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:20.824 00:57:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:20.824 00:57:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:20.824 00:57:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:20.824 00:57:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:20.824 00:57:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:20.824 00:57:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:20.824 00:57:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:20.824 00:57:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 4027984 00:04:20.824 00:57:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 4027984 ']' 00:04:20.824 00:57:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 4027984 00:04:20.824 00:57:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:04:20.824 00:57:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:20.824 00:57:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4027984 00:04:20.824 00:57:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:20.824 00:57:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:20.824 00:57:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4027984' 00:04:20.824 killing process with pid 4027984 00:04:20.824 00:57:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 4027984 00:04:20.824 00:57:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 4027984 00:04:21.390 00:04:21.390 real 0m1.262s 00:04:21.390 user 0m1.447s 00:04:21.390 sys 0m0.424s 00:04:21.390 00:57:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:21.390 00:57:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:21.390 ************************************ 00:04:21.390 END TEST exit_on_failed_rpc_init 00:04:21.390 ************************************ 00:04:21.390 00:57:37 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:21.390 00:57:37 skip_rpc -- 
rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:21.390 00:04:21.390 real 0m13.583s 00:04:21.390 user 0m12.938s 00:04:21.390 sys 0m1.558s 00:04:21.390 00:57:37 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:21.390 00:57:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.390 ************************************ 00:04:21.390 END TEST skip_rpc 00:04:21.390 ************************************ 00:04:21.390 00:57:37 -- common/autotest_common.sh@1142 -- # return 0 00:04:21.390 00:57:37 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:21.390 00:57:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:21.390 00:57:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:21.390 00:57:37 -- common/autotest_common.sh@10 -- # set +x 00:04:21.390 ************************************ 00:04:21.390 START TEST rpc_client 00:04:21.390 ************************************ 00:04:21.390 00:57:37 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:21.390 * Looking for test storage... 00:04:21.390 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:21.390 00:57:37 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:21.390 OK 00:04:21.390 00:57:37 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:21.390 00:04:21.390 real 0m0.065s 00:04:21.390 user 0m0.027s 00:04:21.390 sys 0m0.043s 00:04:21.390 00:57:37 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:21.390 00:57:37 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:21.390 ************************************ 00:04:21.390 END TEST rpc_client 00:04:21.390 ************************************ 00:04:21.390 00:57:37 -- common/autotest_common.sh@1142 -- # return 0 00:04:21.390 00:57:37 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:21.390 00:57:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:21.390 00:57:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:21.390 00:57:37 -- common/autotest_common.sh@10 -- # set +x 00:04:21.650 ************************************ 00:04:21.650 START TEST json_config 00:04:21.650 ************************************ 00:04:21.650 00:57:37 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:21.650 00:57:37 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:21.650 00:57:37 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:21.650 00:57:37 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:21.650 00:57:37 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:21.650 00:57:37 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:21.650 00:57:37 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:21.650 00:57:37 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:21.650 00:57:37 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:21.650 00:57:37 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:21.650 
00:57:37 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:21.650 00:57:37 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:21.650 00:57:37 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:21.650 00:57:37 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:04:21.650 00:57:37 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:04:21.650 00:57:37 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:21.650 00:57:37 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:21.650 00:57:37 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:21.650 00:57:37 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:21.650 00:57:37 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:21.650 00:57:37 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:21.650 00:57:37 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:21.650 00:57:37 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:21.650 00:57:37 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.650 00:57:37 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.650 00:57:37 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.650 00:57:37 json_config -- paths/export.sh@5 -- # export PATH 00:04:21.650 00:57:37 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.650 00:57:37 json_config -- nvmf/common.sh@47 -- # : 0 00:04:21.650 00:57:37 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:21.650 00:57:37 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:21.650 00:57:37 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:21.650 00:57:37 
json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:21.650 00:57:37 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:21.650 00:57:37 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:21.650 00:57:37 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:21.650 00:57:37 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:21.650 00:57:37 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:21.650 00:57:37 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:21.650 00:57:37 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:21.650 00:57:37 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:21.650 00:57:37 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:21.650 00:57:37 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:21.650 00:57:37 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:21.650 00:57:37 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:21.650 00:57:37 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:21.650 00:57:37 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:21.650 00:57:37 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:21.650 00:57:37 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:21.650 00:57:37 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:21.650 00:57:37 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:21.650 00:57:37 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:21.650 00:57:37 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:21.650 INFO: JSON configuration test init 00:04:21.650 00:57:37 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:21.650 00:57:37 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:21.650 00:57:37 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:21.650 00:57:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:21.650 00:57:37 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:21.650 00:57:37 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:21.650 00:57:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:21.650 00:57:37 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:21.650 00:57:37 json_config -- json_config/common.sh@9 -- # local app=target 00:04:21.650 00:57:37 json_config -- json_config/common.sh@10 -- # shift 00:04:21.650 00:57:37 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:21.650 00:57:37 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:21.650 00:57:37 
json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:21.650 00:57:37 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:21.650 00:57:37 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:21.650 00:57:37 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=4028239 00:04:21.650 00:57:37 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:21.650 00:57:37 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:21.650 Waiting for target to run... 00:04:21.650 00:57:37 json_config -- json_config/common.sh@25 -- # waitforlisten 4028239 /var/tmp/spdk_tgt.sock 00:04:21.650 00:57:37 json_config -- common/autotest_common.sh@829 -- # '[' -z 4028239 ']' 00:04:21.650 00:57:37 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:21.650 00:57:37 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:21.650 00:57:37 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:21.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:21.650 00:57:37 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:21.650 00:57:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:21.650 [2024-07-16 00:57:37.499521] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:04:21.650 [2024-07-16 00:57:37.499605] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4028239 ] 00:04:21.650 EAL: No free 2048 kB hugepages reported on node 1 00:04:21.910 [2024-07-16 00:57:37.831968] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.168 [2024-07-16 00:57:37.914851] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.735 00:57:38 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:22.735 00:57:38 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:22.735 00:57:38 json_config -- json_config/common.sh@26 -- # echo '' 00:04:22.735 00:04:22.735 00:57:38 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:04:22.735 00:57:38 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:22.735 00:57:38 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:22.735 00:57:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.735 00:57:38 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:22.735 00:57:38 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:22.735 00:57:38 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:22.735 00:57:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.735 00:57:38 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:22.735 00:57:38 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:22.735 00:57:38 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:26.016 00:57:41 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:26.016 00:57:41 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:26.016 00:57:41 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:26.016 00:57:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.016 00:57:41 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:26.016 00:57:41 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:26.016 00:57:41 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:26.016 00:57:41 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:26.016 00:57:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:26.016 00:57:41 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:26.016 00:57:41 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:26.016 00:57:41 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:26.016 00:57:41 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:26.016 00:57:41 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:26.016 00:57:41 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:26.016 00:57:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.016 00:57:41 json_config -- json_config/json_config.sh@55 -- # return 0 00:04:26.016 00:57:41 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:26.016 00:57:41 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:26.016 00:57:41 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:26.016 00:57:41 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:26.016 00:57:41 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:26.016 00:57:41 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:26.016 00:57:41 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:26.016 00:57:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.016 00:57:41 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:26.016 00:57:41 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:04:26.016 00:57:41 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:04:26.016 00:57:41 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:26.016 00:57:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:26.274 MallocForNvmf0 00:04:26.274 00:57:42 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:26.274 00:57:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:26.532 MallocForNvmf1 00:04:26.532 00:57:42 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:26.532 00:57:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:26.811 [2024-07-16 00:57:42.590321] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:26.811 00:57:42 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:26.811 00:57:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:27.073 00:57:42 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:27.073 00:57:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:27.330 00:57:43 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:27.330 00:57:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:27.588 00:57:43 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:27.588 00:57:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:27.588 [2024-07-16 00:57:43.573575] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:27.845 00:57:43 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:27.845 00:57:43 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:27.845 00:57:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:27.845 00:57:43 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:04:27.845 00:57:43 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:27.845 00:57:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:27.845 00:57:43 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:27.845 00:57:43 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:27.845 00:57:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:28.102 MallocBdevForConfigChangeCheck 00:04:28.102 00:57:43 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:28.102 00:57:43 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:28.102 00:57:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.102 00:57:43 json_config -- 
json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:28.102 00:57:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:28.360 00:57:44 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:04:28.360 INFO: shutting down applications... 00:04:28.360 00:57:44 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:28.360 00:57:44 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:28.360 00:57:44 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:28.360 00:57:44 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:30.257 Calling clear_iscsi_subsystem 00:04:30.257 Calling clear_nvmf_subsystem 00:04:30.257 Calling clear_nbd_subsystem 00:04:30.257 Calling clear_ublk_subsystem 00:04:30.257 Calling clear_vhost_blk_subsystem 00:04:30.257 Calling clear_vhost_scsi_subsystem 00:04:30.257 Calling clear_bdev_subsystem 00:04:30.257 00:57:45 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:30.257 00:57:45 json_config -- json_config/json_config.sh@343 -- # count=100 00:04:30.257 00:57:45 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:30.257 00:57:45 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:30.257 00:57:45 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:30.257 00:57:45 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:30.257 00:57:46 json_config -- json_config/json_config.sh@345 -- # break 00:04:30.257 00:57:46 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:30.257 00:57:46 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:30.257 00:57:46 json_config -- json_config/common.sh@31 -- # local app=target 00:04:30.257 00:57:46 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:30.257 00:57:46 json_config -- json_config/common.sh@35 -- # [[ -n 4028239 ]] 00:04:30.257 00:57:46 json_config -- json_config/common.sh@38 -- # kill -SIGINT 4028239 00:04:30.257 00:57:46 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:30.257 00:57:46 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:30.257 00:57:46 json_config -- json_config/common.sh@41 -- # kill -0 4028239 00:04:30.257 00:57:46 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:30.823 00:57:46 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:30.824 00:57:46 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:30.824 00:57:46 json_config -- json_config/common.sh@41 -- # kill -0 4028239 00:04:30.824 00:57:46 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:30.824 00:57:46 json_config -- json_config/common.sh@43 -- # break 00:04:30.824 00:57:46 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:30.824 00:57:46 json_config -- json_config/common.sh@53 -- # echo 'SPDK target 
shutdown done' 00:04:30.824 SPDK target shutdown done 00:04:30.824 00:57:46 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:04:30.824 INFO: relaunching applications... 00:04:30.824 00:57:46 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:30.824 00:57:46 json_config -- json_config/common.sh@9 -- # local app=target 00:04:30.824 00:57:46 json_config -- json_config/common.sh@10 -- # shift 00:04:30.824 00:57:46 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:30.824 00:57:46 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:30.824 00:57:46 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:30.824 00:57:46 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:30.824 00:57:46 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:30.824 00:57:46 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=4029432 00:04:30.824 00:57:46 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:30.824 00:57:46 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:30.824 Waiting for target to run... 00:04:30.824 00:57:46 json_config -- json_config/common.sh@25 -- # waitforlisten 4029432 /var/tmp/spdk_tgt.sock 00:04:30.824 00:57:46 json_config -- common/autotest_common.sh@829 -- # '[' -z 4029432 ']' 00:04:30.824 00:57:46 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:30.824 00:57:46 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:30.824 00:57:46 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:30.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:30.824 00:57:46 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:30.824 00:57:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.824 [2024-07-16 00:57:46.800883] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 
00:04:30.824 [2024-07-16 00:57:46.800980] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4029432 ] 00:04:31.082 EAL: No free 2048 kB hugepages reported on node 1 00:04:31.340 [2024-07-16 00:57:47.325080] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.599 [2024-07-16 00:57:47.419614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.880 [2024-07-16 00:57:50.456081] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:34.880 [2024-07-16 00:57:50.488527] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:35.445 00:57:51 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:35.445 00:57:51 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:35.445 00:57:51 json_config -- json_config/common.sh@26 -- # echo '' 00:04:35.445 00:04:35.445 00:57:51 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:35.445 00:57:51 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:35.445 INFO: Checking if target configuration is the same... 00:04:35.445 00:57:51 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:35.445 00:57:51 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:35.445 00:57:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:35.445 + '[' 2 -ne 2 ']' 00:04:35.445 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:35.445 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:35.445 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:35.445 +++ basename /dev/fd/62 00:04:35.445 ++ mktemp /tmp/62.XXX 00:04:35.445 + tmp_file_1=/tmp/62.4pO 00:04:35.445 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:35.445 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:35.445 + tmp_file_2=/tmp/spdk_tgt_config.json.kD9 00:04:35.445 + ret=0 00:04:35.445 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:35.702 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:35.702 + diff -u /tmp/62.4pO /tmp/spdk_tgt_config.json.kD9 00:04:35.702 + echo 'INFO: JSON config files are the same' 00:04:35.702 INFO: JSON config files are the same 00:04:35.702 + rm /tmp/62.4pO /tmp/spdk_tgt_config.json.kD9 00:04:35.702 + exit 0 00:04:35.702 00:57:51 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:35.702 00:57:51 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:35.702 INFO: changing configuration and checking if this can be detected... 
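Both sides of the comparison above went through the same pipeline: json_diff.sh dumps the live configuration with save_config, canonicalizes both JSON documents with config_filter.py -method sort, and lets a plain diff decide. Roughly, from the spdk checkout (assuming the filter reads JSON on stdin, which matches its argument-less invocation here):

  $ ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /tmp/live.json
  $ ./test/json_config/config_filter.py -method sort < /tmp/live.json > /tmp/live.sorted
  $ ./test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/file.sorted
  $ diff -u /tmp/file.sorted /tmp/live.sorted && echo 'configs match'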
00:04:35.702 00:57:51 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:35.702 00:57:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:35.959 00:57:51 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:35.959 00:57:51 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:35.959 00:57:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:35.959 + '[' 2 -ne 2 ']' 00:04:35.959 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:35.959 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:35.959 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:35.959 +++ basename /dev/fd/62 00:04:35.959 ++ mktemp /tmp/62.XXX 00:04:35.959 + tmp_file_1=/tmp/62.99l 00:04:35.959 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:35.959 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:35.959 + tmp_file_2=/tmp/spdk_tgt_config.json.sNr 00:04:35.959 + ret=0 00:04:35.959 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:36.523 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:36.523 + diff -u /tmp/62.99l /tmp/spdk_tgt_config.json.sNr 00:04:36.523 + ret=1 00:04:36.523 + echo '=== Start of file: /tmp/62.99l ===' 00:04:36.523 + cat /tmp/62.99l 00:04:36.523 + echo '=== End of file: /tmp/62.99l ===' 00:04:36.523 + echo '' 00:04:36.523 + echo '=== Start of file: /tmp/spdk_tgt_config.json.sNr ===' 00:04:36.523 + cat /tmp/spdk_tgt_config.json.sNr 00:04:36.523 + echo '=== End of file: /tmp/spdk_tgt_config.json.sNr ===' 00:04:36.523 + echo '' 00:04:36.523 + rm /tmp/62.99l /tmp/spdk_tgt_config.json.sNr 00:04:36.523 + exit 1 00:04:36.523 00:57:52 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:36.523 INFO: configuration change detected. 
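The change-detection pass is the mirror image of the check above: delete the sentinel bdev created for exactly this purpose, save the configuration again, and require the diff to be non-empty (ret=1). A sketch using the names from this run:

  $ ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
  $ ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /tmp/changed.json
  $ ./test/json_config/config_filter.py -method sort < /tmp/changed.json > /tmp/changed.sorted
  $ diff -u /tmp/file.sorted /tmp/changed.sorted || echo 'configuration change detected'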
00:04:36.523 00:57:52 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:36.523 00:57:52 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:36.523 00:57:52 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:36.523 00:57:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:36.523 00:57:52 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:04:36.523 00:57:52 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:36.523 00:57:52 json_config -- json_config/json_config.sh@317 -- # [[ -n 4029432 ]] 00:04:36.523 00:57:52 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:36.523 00:57:52 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:36.523 00:57:52 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:36.523 00:57:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:36.523 00:57:52 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:04:36.523 00:57:52 json_config -- json_config/json_config.sh@193 -- # uname -s 00:04:36.523 00:57:52 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:04:36.523 00:57:52 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:04:36.523 00:57:52 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:36.523 00:57:52 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:36.523 00:57:52 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:36.523 00:57:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:36.523 00:57:52 json_config -- json_config/json_config.sh@323 -- # killprocess 4029432 00:04:36.523 00:57:52 json_config -- common/autotest_common.sh@948 -- # '[' -z 4029432 ']' 00:04:36.523 00:57:52 json_config -- common/autotest_common.sh@952 -- # kill -0 4029432 00:04:36.523 00:57:52 json_config -- common/autotest_common.sh@953 -- # uname 00:04:36.523 00:57:52 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:36.523 00:57:52 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4029432 00:04:36.523 00:57:52 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:36.523 00:57:52 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:36.523 00:57:52 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4029432' 00:04:36.523 killing process with pid 4029432 00:04:36.523 00:57:52 json_config -- common/autotest_common.sh@967 -- # kill 4029432 00:04:36.523 00:57:52 json_config -- common/autotest_common.sh@972 -- # wait 4029432 00:04:38.414 00:57:53 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:38.414 00:57:53 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:38.414 00:57:53 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:38.414 00:57:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:38.415 00:57:53 json_config -- json_config/json_config.sh@328 -- # return 0 00:04:38.415 00:57:53 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:38.415 INFO: Success 00:04:38.415 00:04:38.415 real 0m16.598s 
00:04:38.415 user 0m18.529s 00:04:38.415 sys 0m2.041s 00:04:38.415 00:57:53 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:38.415 00:57:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:38.415 ************************************ 00:04:38.415 END TEST json_config 00:04:38.415 ************************************ 00:04:38.415 00:57:54 -- common/autotest_common.sh@1142 -- # return 0 00:04:38.415 00:57:54 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:38.415 00:57:54 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:38.415 00:57:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.415 00:57:54 -- common/autotest_common.sh@10 -- # set +x 00:04:38.415 ************************************ 00:04:38.415 START TEST json_config_extra_key 00:04:38.415 ************************************ 00:04:38.415 00:57:54 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:38.415 00:57:54 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:38.415 00:57:54 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:38.415 00:57:54 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:38.415 00:57:54 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:38.415 00:57:54 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:38.415 00:57:54 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:38.415 00:57:54 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:38.415 00:57:54 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:38.415 00:57:54 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:38.415 00:57:54 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:38.415 00:57:54 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:38.415 00:57:54 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:38.415 00:57:54 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:04:38.415 00:57:54 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:04:38.415 00:57:54 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:38.415 00:57:54 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:38.415 00:57:54 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:38.415 00:57:54 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:38.415 00:57:54 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:38.415 00:57:54 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:38.415 00:57:54 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:38.415 00:57:54 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:38.415 00:57:54 json_config_extra_key -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.415 00:57:54 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.415 00:57:54 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.415 00:57:54 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:38.415 00:57:54 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.415 00:57:54 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:38.415 00:57:54 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:38.415 00:57:54 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:38.415 00:57:54 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:38.415 00:57:54 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:38.415 00:57:54 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:38.415 00:57:54 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:38.415 00:57:54 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:38.415 00:57:54 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:38.415 00:57:54 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:38.415 00:57:54 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:38.415 00:57:54 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:38.415 00:57:54 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:38.415 00:57:54 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:38.415 00:57:54 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:38.415 00:57:54 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:38.415 00:57:54 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:38.415 00:57:54 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:38.415 00:57:54 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:38.415 00:57:54 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:38.415 INFO: launching applications... 00:04:38.415 00:57:54 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:38.415 00:57:54 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:38.415 00:57:54 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:38.415 00:57:54 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:38.415 00:57:54 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:38.415 00:57:54 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:38.415 00:57:54 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:38.415 00:57:54 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:38.415 00:57:54 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=4030463 00:04:38.415 00:57:54 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:38.415 00:57:54 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:38.415 Waiting for target to run... 00:04:38.415 00:57:54 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 4030463 /var/tmp/spdk_tgt.sock 00:04:38.415 00:57:54 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 4030463 ']' 00:04:38.415 00:57:54 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:38.415 00:57:54 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:38.415 00:57:54 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:38.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:38.415 00:57:54 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:38.415 00:57:54 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:38.415 [2024-07-16 00:57:54.169711] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 
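Where json_config built its configuration over RPC, json_config_extra_key boots the target directly from a JSON file via --json, then only has to confirm startup and shut the target down again. The launch reduced to its essentials (paths as in this job; the until loop is a rough stand-in for the waitforlisten helper):

  $ ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json test/json_config/extra_key.json &
  $ until [ -S /var/tmp/spdk_tgt.sock ]; do sleep 0.1; done   # wait for the RPC socket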
00:04:38.415 [2024-07-16 00:57:54.169796] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4030463 ] 00:04:38.415 EAL: No free 2048 kB hugepages reported on node 1 00:04:38.672 [2024-07-16 00:57:54.499738] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.672 [2024-07-16 00:57:54.585062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.236 00:57:55 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:39.236 00:57:55 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:04:39.236 00:57:55 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:39.236 00:04:39.236 00:57:55 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:39.236 INFO: shutting down applications... 00:04:39.236 00:57:55 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:39.236 00:57:55 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:39.236 00:57:55 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:39.236 00:57:55 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 4030463 ]] 00:04:39.236 00:57:55 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 4030463 00:04:39.236 00:57:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:39.236 00:57:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:39.236 00:57:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 4030463 00:04:39.236 00:57:55 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:39.800 00:57:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:39.800 00:57:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:39.800 00:57:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 4030463 00:04:39.800 00:57:55 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:39.800 00:57:55 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:39.800 00:57:55 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:39.800 00:57:55 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:39.800 SPDK target shutdown done 00:04:39.800 00:57:55 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:39.800 Success 00:04:39.800 00:04:39.800 real 0m1.566s 00:04:39.800 user 0m1.570s 00:04:39.800 sys 0m0.430s 00:04:39.800 00:57:55 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:39.800 00:57:55 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:39.800 ************************************ 00:04:39.800 END TEST json_config_extra_key 00:04:39.800 ************************************ 00:04:39.800 00:57:55 -- common/autotest_common.sh@1142 -- # return 0 00:04:39.800 00:57:55 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:39.800 00:57:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:39.800 00:57:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:39.800 00:57:55 -- 
common/autotest_common.sh@10 -- # set +x 00:04:39.800 ************************************ 00:04:39.800 START TEST alias_rpc 00:04:39.800 ************************************ 00:04:39.800 00:57:55 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:39.800 * Looking for test storage... 00:04:39.800 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:39.800 00:57:55 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:39.800 00:57:55 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=4030662 00:04:39.800 00:57:55 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:39.800 00:57:55 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 4030662 00:04:39.800 00:57:55 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 4030662 ']' 00:04:39.800 00:57:55 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:39.800 00:57:55 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:39.800 00:57:55 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:39.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:39.800 00:57:55 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:39.800 00:57:55 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.800 [2024-07-16 00:57:55.776780] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:04:39.800 [2024-07-16 00:57:55.776872] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4030662 ] 00:04:40.059 EAL: No free 2048 kB hugepages reported on node 1 00:04:40.059 [2024-07-16 00:57:55.836706] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.059 [2024-07-16 00:57:55.942156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.317 00:57:56 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:40.317 00:57:56 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:40.317 00:57:56 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:40.574 00:57:56 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 4030662 00:04:40.574 00:57:56 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 4030662 ']' 00:04:40.574 00:57:56 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 4030662 00:04:40.574 00:57:56 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:04:40.574 00:57:56 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:40.574 00:57:56 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4030662 00:04:40.574 00:57:56 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:40.574 00:57:56 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:40.574 00:57:56 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4030662' 00:04:40.574 killing process with pid 4030662 00:04:40.574 00:57:56 alias_rpc -- 
common/autotest_common.sh@967 -- # kill 4030662 00:04:40.574 00:57:56 alias_rpc -- common/autotest_common.sh@972 -- # wait 4030662 00:04:41.140 00:04:41.140 real 0m1.230s 00:04:41.140 user 0m1.325s 00:04:41.140 sys 0m0.400s 00:04:41.140 00:57:56 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:41.140 00:57:56 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.140 ************************************ 00:04:41.140 END TEST alias_rpc 00:04:41.140 ************************************ 00:04:41.140 00:57:56 -- common/autotest_common.sh@1142 -- # return 0 00:04:41.140 00:57:56 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:04:41.140 00:57:56 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:41.140 00:57:56 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:41.140 00:57:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:41.140 00:57:56 -- common/autotest_common.sh@10 -- # set +x 00:04:41.140 ************************************ 00:04:41.140 START TEST spdkcli_tcp 00:04:41.140 ************************************ 00:04:41.140 00:57:56 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:41.140 * Looking for test storage... 00:04:41.140 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:41.140 00:57:57 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:41.140 00:57:57 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:41.140 00:57:57 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:41.140 00:57:57 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:41.140 00:57:57 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:41.140 00:57:57 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:41.140 00:57:57 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:41.140 00:57:57 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:41.140 00:57:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:41.140 00:57:57 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=4030941 00:04:41.140 00:57:57 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:41.140 00:57:57 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 4030941 00:04:41.140 00:57:57 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 4030941 ']' 00:04:41.140 00:57:57 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.140 00:57:57 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:41.140 00:57:57 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:41.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:41.140 00:57:57 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:41.140 00:57:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:41.140 [2024-07-16 00:57:57.059186] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 
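Note on the spdkcli_tcp test now starting: it is the one case in this run that drives RPCs over TCP rather than the default UNIX-domain socket. Reduced to its essentials, the bridge it builds (the exact commands appear verbatim in the trace below) is:

  # forward TCP 127.0.0.1:9998 to the target's UNIX-domain RPC socket
  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  # issue an RPC over the TCP side; -r/-t look like connection retries and timeout
  scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

The long rpc_get_methods reply that follows is simply every RPC method registered in this spdk_tgt build, so its size is expected.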
00:04:41.140 [2024-07-16 00:57:57.059290] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4030941 ] 00:04:41.140 EAL: No free 2048 kB hugepages reported on node 1 00:04:41.140 [2024-07-16 00:57:57.116835] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:41.399 [2024-07-16 00:57:57.231179] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:41.399 [2024-07-16 00:57:57.231183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.657 00:57:57 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:41.657 00:57:57 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:04:41.657 00:57:57 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=4030973 00:04:41.657 00:57:57 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:41.657 00:57:57 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:41.916 [ 00:04:41.916 "bdev_malloc_delete", 00:04:41.916 "bdev_malloc_create", 00:04:41.916 "bdev_null_resize", 00:04:41.916 "bdev_null_delete", 00:04:41.916 "bdev_null_create", 00:04:41.916 "bdev_nvme_cuse_unregister", 00:04:41.916 "bdev_nvme_cuse_register", 00:04:41.916 "bdev_opal_new_user", 00:04:41.916 "bdev_opal_set_lock_state", 00:04:41.916 "bdev_opal_delete", 00:04:41.916 "bdev_opal_get_info", 00:04:41.916 "bdev_opal_create", 00:04:41.916 "bdev_nvme_opal_revert", 00:04:41.916 "bdev_nvme_opal_init", 00:04:41.916 "bdev_nvme_send_cmd", 00:04:41.916 "bdev_nvme_get_path_iostat", 00:04:41.916 "bdev_nvme_get_mdns_discovery_info", 00:04:41.916 "bdev_nvme_stop_mdns_discovery", 00:04:41.916 "bdev_nvme_start_mdns_discovery", 00:04:41.916 "bdev_nvme_set_multipath_policy", 00:04:41.916 "bdev_nvme_set_preferred_path", 00:04:41.916 "bdev_nvme_get_io_paths", 00:04:41.916 "bdev_nvme_remove_error_injection", 00:04:41.916 "bdev_nvme_add_error_injection", 00:04:41.916 "bdev_nvme_get_discovery_info", 00:04:41.916 "bdev_nvme_stop_discovery", 00:04:41.916 "bdev_nvme_start_discovery", 00:04:41.916 "bdev_nvme_get_controller_health_info", 00:04:41.916 "bdev_nvme_disable_controller", 00:04:41.916 "bdev_nvme_enable_controller", 00:04:41.916 "bdev_nvme_reset_controller", 00:04:41.916 "bdev_nvme_get_transport_statistics", 00:04:41.916 "bdev_nvme_apply_firmware", 00:04:41.916 "bdev_nvme_detach_controller", 00:04:41.916 "bdev_nvme_get_controllers", 00:04:41.916 "bdev_nvme_attach_controller", 00:04:41.916 "bdev_nvme_set_hotplug", 00:04:41.916 "bdev_nvme_set_options", 00:04:41.916 "bdev_passthru_delete", 00:04:41.916 "bdev_passthru_create", 00:04:41.916 "bdev_lvol_set_parent_bdev", 00:04:41.916 "bdev_lvol_set_parent", 00:04:41.916 "bdev_lvol_check_shallow_copy", 00:04:41.916 "bdev_lvol_start_shallow_copy", 00:04:41.916 "bdev_lvol_grow_lvstore", 00:04:41.916 "bdev_lvol_get_lvols", 00:04:41.916 "bdev_lvol_get_lvstores", 00:04:41.916 "bdev_lvol_delete", 00:04:41.916 "bdev_lvol_set_read_only", 00:04:41.916 "bdev_lvol_resize", 00:04:41.916 "bdev_lvol_decouple_parent", 00:04:41.916 "bdev_lvol_inflate", 00:04:41.916 "bdev_lvol_rename", 00:04:41.916 "bdev_lvol_clone_bdev", 00:04:41.916 "bdev_lvol_clone", 00:04:41.916 "bdev_lvol_snapshot", 00:04:41.916 "bdev_lvol_create", 00:04:41.916 "bdev_lvol_delete_lvstore", 00:04:41.916 
"bdev_lvol_rename_lvstore", 00:04:41.916 "bdev_lvol_create_lvstore", 00:04:41.916 "bdev_raid_set_options", 00:04:41.916 "bdev_raid_remove_base_bdev", 00:04:41.916 "bdev_raid_add_base_bdev", 00:04:41.916 "bdev_raid_delete", 00:04:41.916 "bdev_raid_create", 00:04:41.916 "bdev_raid_get_bdevs", 00:04:41.916 "bdev_error_inject_error", 00:04:41.916 "bdev_error_delete", 00:04:41.916 "bdev_error_create", 00:04:41.916 "bdev_split_delete", 00:04:41.916 "bdev_split_create", 00:04:41.916 "bdev_delay_delete", 00:04:41.916 "bdev_delay_create", 00:04:41.916 "bdev_delay_update_latency", 00:04:41.916 "bdev_zone_block_delete", 00:04:41.916 "bdev_zone_block_create", 00:04:41.916 "blobfs_create", 00:04:41.916 "blobfs_detect", 00:04:41.916 "blobfs_set_cache_size", 00:04:41.916 "bdev_aio_delete", 00:04:41.916 "bdev_aio_rescan", 00:04:41.916 "bdev_aio_create", 00:04:41.916 "bdev_ftl_set_property", 00:04:41.916 "bdev_ftl_get_properties", 00:04:41.916 "bdev_ftl_get_stats", 00:04:41.916 "bdev_ftl_unmap", 00:04:41.916 "bdev_ftl_unload", 00:04:41.916 "bdev_ftl_delete", 00:04:41.916 "bdev_ftl_load", 00:04:41.916 "bdev_ftl_create", 00:04:41.916 "bdev_virtio_attach_controller", 00:04:41.916 "bdev_virtio_scsi_get_devices", 00:04:41.916 "bdev_virtio_detach_controller", 00:04:41.916 "bdev_virtio_blk_set_hotplug", 00:04:41.916 "bdev_iscsi_delete", 00:04:41.916 "bdev_iscsi_create", 00:04:41.916 "bdev_iscsi_set_options", 00:04:41.916 "accel_error_inject_error", 00:04:41.916 "ioat_scan_accel_module", 00:04:41.916 "dsa_scan_accel_module", 00:04:41.916 "iaa_scan_accel_module", 00:04:41.916 "vfu_virtio_create_scsi_endpoint", 00:04:41.916 "vfu_virtio_scsi_remove_target", 00:04:41.916 "vfu_virtio_scsi_add_target", 00:04:41.916 "vfu_virtio_create_blk_endpoint", 00:04:41.916 "vfu_virtio_delete_endpoint", 00:04:41.916 "keyring_file_remove_key", 00:04:41.916 "keyring_file_add_key", 00:04:41.916 "keyring_linux_set_options", 00:04:41.916 "iscsi_get_histogram", 00:04:41.916 "iscsi_enable_histogram", 00:04:41.916 "iscsi_set_options", 00:04:41.916 "iscsi_get_auth_groups", 00:04:41.916 "iscsi_auth_group_remove_secret", 00:04:41.916 "iscsi_auth_group_add_secret", 00:04:41.916 "iscsi_delete_auth_group", 00:04:41.916 "iscsi_create_auth_group", 00:04:41.916 "iscsi_set_discovery_auth", 00:04:41.916 "iscsi_get_options", 00:04:41.916 "iscsi_target_node_request_logout", 00:04:41.916 "iscsi_target_node_set_redirect", 00:04:41.916 "iscsi_target_node_set_auth", 00:04:41.916 "iscsi_target_node_add_lun", 00:04:41.916 "iscsi_get_stats", 00:04:41.916 "iscsi_get_connections", 00:04:41.916 "iscsi_portal_group_set_auth", 00:04:41.916 "iscsi_start_portal_group", 00:04:41.916 "iscsi_delete_portal_group", 00:04:41.916 "iscsi_create_portal_group", 00:04:41.916 "iscsi_get_portal_groups", 00:04:41.916 "iscsi_delete_target_node", 00:04:41.916 "iscsi_target_node_remove_pg_ig_maps", 00:04:41.916 "iscsi_target_node_add_pg_ig_maps", 00:04:41.916 "iscsi_create_target_node", 00:04:41.916 "iscsi_get_target_nodes", 00:04:41.916 "iscsi_delete_initiator_group", 00:04:41.916 "iscsi_initiator_group_remove_initiators", 00:04:41.916 "iscsi_initiator_group_add_initiators", 00:04:41.916 "iscsi_create_initiator_group", 00:04:41.916 "iscsi_get_initiator_groups", 00:04:41.916 "nvmf_set_crdt", 00:04:41.916 "nvmf_set_config", 00:04:41.916 "nvmf_set_max_subsystems", 00:04:41.916 "nvmf_stop_mdns_prr", 00:04:41.916 "nvmf_publish_mdns_prr", 00:04:41.916 "nvmf_subsystem_get_listeners", 00:04:41.916 "nvmf_subsystem_get_qpairs", 00:04:41.916 "nvmf_subsystem_get_controllers", 00:04:41.916 
"nvmf_get_stats", 00:04:41.916 "nvmf_get_transports", 00:04:41.916 "nvmf_create_transport", 00:04:41.916 "nvmf_get_targets", 00:04:41.916 "nvmf_delete_target", 00:04:41.916 "nvmf_create_target", 00:04:41.916 "nvmf_subsystem_allow_any_host", 00:04:41.916 "nvmf_subsystem_remove_host", 00:04:41.916 "nvmf_subsystem_add_host", 00:04:41.916 "nvmf_ns_remove_host", 00:04:41.916 "nvmf_ns_add_host", 00:04:41.916 "nvmf_subsystem_remove_ns", 00:04:41.916 "nvmf_subsystem_add_ns", 00:04:41.916 "nvmf_subsystem_listener_set_ana_state", 00:04:41.916 "nvmf_discovery_get_referrals", 00:04:41.916 "nvmf_discovery_remove_referral", 00:04:41.916 "nvmf_discovery_add_referral", 00:04:41.916 "nvmf_subsystem_remove_listener", 00:04:41.916 "nvmf_subsystem_add_listener", 00:04:41.916 "nvmf_delete_subsystem", 00:04:41.916 "nvmf_create_subsystem", 00:04:41.916 "nvmf_get_subsystems", 00:04:41.916 "env_dpdk_get_mem_stats", 00:04:41.916 "nbd_get_disks", 00:04:41.916 "nbd_stop_disk", 00:04:41.916 "nbd_start_disk", 00:04:41.916 "ublk_recover_disk", 00:04:41.916 "ublk_get_disks", 00:04:41.916 "ublk_stop_disk", 00:04:41.916 "ublk_start_disk", 00:04:41.916 "ublk_destroy_target", 00:04:41.916 "ublk_create_target", 00:04:41.916 "virtio_blk_create_transport", 00:04:41.916 "virtio_blk_get_transports", 00:04:41.916 "vhost_controller_set_coalescing", 00:04:41.916 "vhost_get_controllers", 00:04:41.916 "vhost_delete_controller", 00:04:41.916 "vhost_create_blk_controller", 00:04:41.916 "vhost_scsi_controller_remove_target", 00:04:41.916 "vhost_scsi_controller_add_target", 00:04:41.916 "vhost_start_scsi_controller", 00:04:41.916 "vhost_create_scsi_controller", 00:04:41.916 "thread_set_cpumask", 00:04:41.916 "framework_get_governor", 00:04:41.916 "framework_get_scheduler", 00:04:41.916 "framework_set_scheduler", 00:04:41.916 "framework_get_reactors", 00:04:41.916 "thread_get_io_channels", 00:04:41.916 "thread_get_pollers", 00:04:41.916 "thread_get_stats", 00:04:41.916 "framework_monitor_context_switch", 00:04:41.916 "spdk_kill_instance", 00:04:41.916 "log_enable_timestamps", 00:04:41.916 "log_get_flags", 00:04:41.916 "log_clear_flag", 00:04:41.916 "log_set_flag", 00:04:41.916 "log_get_level", 00:04:41.916 "log_set_level", 00:04:41.916 "log_get_print_level", 00:04:41.916 "log_set_print_level", 00:04:41.916 "framework_enable_cpumask_locks", 00:04:41.916 "framework_disable_cpumask_locks", 00:04:41.916 "framework_wait_init", 00:04:41.916 "framework_start_init", 00:04:41.916 "scsi_get_devices", 00:04:41.916 "bdev_get_histogram", 00:04:41.916 "bdev_enable_histogram", 00:04:41.916 "bdev_set_qos_limit", 00:04:41.916 "bdev_set_qd_sampling_period", 00:04:41.916 "bdev_get_bdevs", 00:04:41.916 "bdev_reset_iostat", 00:04:41.916 "bdev_get_iostat", 00:04:41.916 "bdev_examine", 00:04:41.916 "bdev_wait_for_examine", 00:04:41.916 "bdev_set_options", 00:04:41.916 "notify_get_notifications", 00:04:41.916 "notify_get_types", 00:04:41.917 "accel_get_stats", 00:04:41.917 "accel_set_options", 00:04:41.917 "accel_set_driver", 00:04:41.917 "accel_crypto_key_destroy", 00:04:41.917 "accel_crypto_keys_get", 00:04:41.917 "accel_crypto_key_create", 00:04:41.917 "accel_assign_opc", 00:04:41.917 "accel_get_module_info", 00:04:41.917 "accel_get_opc_assignments", 00:04:41.917 "vmd_rescan", 00:04:41.917 "vmd_remove_device", 00:04:41.917 "vmd_enable", 00:04:41.917 "sock_get_default_impl", 00:04:41.917 "sock_set_default_impl", 00:04:41.917 "sock_impl_set_options", 00:04:41.917 "sock_impl_get_options", 00:04:41.917 "iobuf_get_stats", 00:04:41.917 "iobuf_set_options", 
00:04:41.917 "keyring_get_keys", 00:04:41.917 "framework_get_pci_devices", 00:04:41.917 "framework_get_config", 00:04:41.917 "framework_get_subsystems", 00:04:41.917 "vfu_tgt_set_base_path", 00:04:41.917 "trace_get_info", 00:04:41.917 "trace_get_tpoint_group_mask", 00:04:41.917 "trace_disable_tpoint_group", 00:04:41.917 "trace_enable_tpoint_group", 00:04:41.917 "trace_clear_tpoint_mask", 00:04:41.917 "trace_set_tpoint_mask", 00:04:41.917 "spdk_get_version", 00:04:41.917 "rpc_get_methods" 00:04:41.917 ] 00:04:41.917 00:57:57 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:41.917 00:57:57 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:41.917 00:57:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:41.917 00:57:57 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:41.917 00:57:57 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 4030941 00:04:41.917 00:57:57 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 4030941 ']' 00:04:41.917 00:57:57 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 4030941 00:04:41.917 00:57:57 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:04:41.917 00:57:57 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:41.917 00:57:57 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4030941 00:04:41.917 00:57:57 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:41.917 00:57:57 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:41.917 00:57:57 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4030941' 00:04:41.917 killing process with pid 4030941 00:04:41.917 00:57:57 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 4030941 00:04:41.917 00:57:57 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 4030941 00:04:42.484 00:04:42.484 real 0m1.244s 00:04:42.484 user 0m2.140s 00:04:42.484 sys 0m0.463s 00:04:42.484 00:57:58 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:42.484 00:57:58 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:42.484 ************************************ 00:04:42.484 END TEST spdkcli_tcp 00:04:42.484 ************************************ 00:04:42.484 00:57:58 -- common/autotest_common.sh@1142 -- # return 0 00:04:42.484 00:57:58 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:42.484 00:57:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:42.484 00:57:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.484 00:57:58 -- common/autotest_common.sh@10 -- # set +x 00:04:42.484 ************************************ 00:04:42.484 START TEST dpdk_mem_utility 00:04:42.484 ************************************ 00:04:42.484 00:57:58 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:42.484 * Looking for test storage... 
00:04:42.484 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:42.484 00:57:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:42.484 00:57:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=4031171 00:04:42.484 00:57:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:42.484 00:57:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 4031171 00:04:42.484 00:57:58 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 4031171 ']' 00:04:42.484 00:57:58 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.484 00:57:58 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:42.484 00:57:58 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:42.484 00:57:58 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:42.484 00:57:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:42.484 [2024-07-16 00:57:58.346532] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:04:42.484 [2024-07-16 00:57:58.346626] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4031171 ] 00:04:42.484 EAL: No free 2048 kB hugepages reported on node 1 00:04:42.484 [2024-07-16 00:57:58.403075] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.742 [2024-07-16 00:57:58.508437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.999 00:57:58 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:42.999 00:57:58 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:04:42.999 00:57:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:42.999 00:57:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:42.999 00:57:58 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:42.999 00:57:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:42.999 { 00:04:42.999 "filename": "/tmp/spdk_mem_dump.txt" 00:04:42.999 } 00:04:42.999 00:57:58 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:42.999 00:57:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:42.999 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:42.999 1 heaps totaling size 814.000000 MiB 00:04:42.999 size: 814.000000 MiB heap id: 0 00:04:42.999 end heaps---------- 00:04:42.999 8 mempools totaling size 598.116089 MiB 00:04:42.999 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:42.999 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:42.999 size: 84.521057 MiB name: bdev_io_4031171 00:04:42.999 size: 51.011292 MiB name: evtpool_4031171 00:04:42.999 
size: 50.003479 MiB name: msgpool_4031171 00:04:42.999 size: 21.763794 MiB name: PDU_Pool 00:04:42.999 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:42.999 size: 0.026123 MiB name: Session_Pool 00:04:42.999 end mempools------- 00:04:42.999 6 memzones totaling size 4.142822 MiB 00:04:42.999 size: 1.000366 MiB name: RG_ring_0_4031171 00:04:42.999 size: 1.000366 MiB name: RG_ring_1_4031171 00:04:42.999 size: 1.000366 MiB name: RG_ring_4_4031171 00:04:42.999 size: 1.000366 MiB name: RG_ring_5_4031171 00:04:42.999 size: 0.125366 MiB name: RG_ring_2_4031171 00:04:42.999 size: 0.015991 MiB name: RG_ring_3_4031171 00:04:42.999 end memzones------- 00:04:42.999 00:57:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:42.999 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:04:42.999 list of free elements. size: 12.519348 MiB 00:04:42.999 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:42.999 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:42.999 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:42.999 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:42.999 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:42.999 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:42.999 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:42.999 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:42.999 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:42.999 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:04:42.999 element at address: 0x20000b200000 with size: 0.490723 MiB 00:04:42.999 element at address: 0x200000800000 with size: 0.487793 MiB 00:04:42.999 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:42.999 element at address: 0x200027e00000 with size: 0.410034 MiB 00:04:42.999 element at address: 0x200003a00000 with size: 0.355530 MiB 00:04:42.999 list of standard malloc elements. 
size: 199.218079 MiB 00:04:42.999 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:42.999 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:42.999 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:42.999 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:42.999 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:42.999 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:42.999 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:42.999 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:42.999 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:42.999 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:42.999 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:42.999 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:42.999 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:42.999 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:42.999 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:42.999 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:42.999 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:42.999 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:42.999 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:42.999 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:42.999 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:42.999 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:42.999 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:42.999 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:42.999 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:42.999 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:42.999 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:42.999 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:42.999 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:42.999 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:42.999 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:42.999 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:42.999 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:42.999 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:42.999 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:42.999 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:42.999 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:04:42.999 element at address: 0x200027e69040 with size: 0.000183 MiB 00:04:42.999 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:04:42.999 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:42.999 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:42.999 list of memzone associated elements. 
size: 602.262573 MiB 00:04:42.999 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:42.999 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:42.999 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:42.999 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:42.999 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:43.000 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_4031171_0 00:04:43.000 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:43.000 associated memzone info: size: 48.002930 MiB name: MP_evtpool_4031171_0 00:04:43.000 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:43.000 associated memzone info: size: 48.002930 MiB name: MP_msgpool_4031171_0 00:04:43.000 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:43.000 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:43.000 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:43.000 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:43.000 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:43.000 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_4031171 00:04:43.000 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:43.000 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_4031171 00:04:43.000 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:43.000 associated memzone info: size: 1.007996 MiB name: MP_evtpool_4031171 00:04:43.000 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:43.000 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:43.000 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:43.000 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:43.000 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:43.000 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:43.000 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:43.000 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:43.000 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:43.000 associated memzone info: size: 1.000366 MiB name: RG_ring_0_4031171 00:04:43.000 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:43.000 associated memzone info: size: 1.000366 MiB name: RG_ring_1_4031171 00:04:43.000 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:43.000 associated memzone info: size: 1.000366 MiB name: RG_ring_4_4031171 00:04:43.000 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:43.000 associated memzone info: size: 1.000366 MiB name: RG_ring_5_4031171 00:04:43.000 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:43.000 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_4031171 00:04:43.000 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:43.000 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:43.000 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:43.000 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:43.000 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:43.000 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:43.000 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:43.000 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_4031171 00:04:43.000 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:43.000 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:43.000 element at address: 0x200027e69100 with size: 0.023743 MiB 00:04:43.000 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:43.000 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:43.000 associated memzone info: size: 0.015991 MiB name: RG_ring_3_4031171 00:04:43.000 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:04:43.000 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:43.000 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:43.000 associated memzone info: size: 0.000183 MiB name: MP_msgpool_4031171 00:04:43.000 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:43.000 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_4031171 00:04:43.000 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:04:43.000 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:43.000 00:57:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:43.000 00:57:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 4031171 00:04:43.000 00:57:58 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 4031171 ']' 00:04:43.000 00:57:58 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 4031171 00:04:43.000 00:57:58 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:04:43.000 00:57:58 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:43.000 00:57:58 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4031171 00:04:43.000 00:57:58 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:43.000 00:57:58 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:43.000 00:57:58 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4031171' 00:04:43.000 killing process with pid 4031171 00:04:43.000 00:57:58 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 4031171 00:04:43.000 00:57:58 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 4031171 00:04:43.565 00:04:43.565 real 0m1.059s 00:04:43.565 user 0m1.020s 00:04:43.565 sys 0m0.382s 00:04:43.565 00:57:59 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:43.565 00:57:59 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:43.565 ************************************ 00:04:43.565 END TEST dpdk_mem_utility 00:04:43.565 ************************************ 00:04:43.565 00:57:59 -- common/autotest_common.sh@1142 -- # return 0 00:04:43.566 00:57:59 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:43.566 00:57:59 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:43.566 00:57:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.566 00:57:59 -- common/autotest_common.sh@10 -- # set +x 00:04:43.566 ************************************ 00:04:43.566 START TEST event 00:04:43.566 ************************************ 00:04:43.566 00:57:59 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:43.566 * Looking for test storage... 
00:04:43.566 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:43.566 00:57:59 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:43.566 00:57:59 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:43.566 00:57:59 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:43.566 00:57:59 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:04:43.566 00:57:59 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.566 00:57:59 event -- common/autotest_common.sh@10 -- # set +x 00:04:43.566 ************************************ 00:04:43.566 START TEST event_perf 00:04:43.566 ************************************ 00:04:43.566 00:57:59 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:43.566 Running I/O for 1 seconds...[2024-07-16 00:57:59.442878] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:04:43.566 [2024-07-16 00:57:59.442944] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4031359 ] 00:04:43.566 EAL: No free 2048 kB hugepages reported on node 1 00:04:43.566 [2024-07-16 00:57:59.499447] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:43.824 [2024-07-16 00:57:59.602674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:43.824 [2024-07-16 00:57:59.602772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:43.824 [2024-07-16 00:57:59.602860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:43.824 [2024-07-16 00:57:59.602868] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.759 Running I/O for 1 seconds... 00:04:44.759 lcore 0: 225926 00:04:44.759 lcore 1: 225924 00:04:44.759 lcore 2: 225923 00:04:44.759 lcore 3: 225924 00:04:44.759 done. 00:04:44.759 00:04:44.759 real 0m1.286s 00:04:44.759 user 0m4.196s 00:04:44.759 sys 0m0.084s 00:04:44.759 00:58:00 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:44.759 00:58:00 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:44.759 ************************************ 00:04:44.759 END TEST event_perf 00:04:44.759 ************************************ 00:04:44.759 00:58:00 event -- common/autotest_common.sh@1142 -- # return 0 00:04:44.759 00:58:00 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:44.759 00:58:00 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:44.759 00:58:00 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.759 00:58:00 event -- common/autotest_common.sh@10 -- # set +x 00:04:45.016 ************************************ 00:04:45.016 START TEST event_reactor 00:04:45.016 ************************************ 00:04:45.017 00:58:00 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:45.017 [2024-07-16 00:58:00.767647] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 
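Aside on the event_perf numbers above: the four "lcore N:" lines are per-reactor event counts for the one-second run, about 904k events in total across the 0xF core mask. The recorded invocation is reproducible by hand from this workspace:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # -m 0xF: reactors on cores 0-3; -t 1: one-second run (cf. "Running I/O for 1 seconds...")
  ./test/event/event_perf/event_perf -m 0xF -t 1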
00:04:45.017 [2024-07-16 00:58:00.767696] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4031514 ] 00:04:45.017 EAL: No free 2048 kB hugepages reported on node 1 00:04:45.017 [2024-07-16 00:58:00.823444] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.017 [2024-07-16 00:58:00.926631] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.387 test_start 00:04:46.387 oneshot 00:04:46.387 tick 100 00:04:46.387 tick 100 00:04:46.387 tick 250 00:04:46.387 tick 100 00:04:46.387 tick 100 00:04:46.387 tick 100 00:04:46.387 tick 250 00:04:46.387 tick 500 00:04:46.387 tick 100 00:04:46.387 tick 100 00:04:46.387 tick 250 00:04:46.387 tick 100 00:04:46.387 tick 100 00:04:46.387 test_end 00:04:46.387 00:04:46.387 real 0m1.279s 00:04:46.387 user 0m1.190s 00:04:46.387 sys 0m0.085s 00:04:46.387 00:58:02 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:46.387 00:58:02 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:46.387 ************************************ 00:04:46.387 END TEST event_reactor 00:04:46.387 ************************************ 00:04:46.387 00:58:02 event -- common/autotest_common.sh@1142 -- # return 0 00:04:46.387 00:58:02 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:46.387 00:58:02 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:46.387 00:58:02 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.387 00:58:02 event -- common/autotest_common.sh@10 -- # set +x 00:04:46.387 ************************************ 00:04:46.387 START TEST event_reactor_perf 00:04:46.387 ************************************ 00:04:46.387 00:58:02 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:46.387 [2024-07-16 00:58:02.098237] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 
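Aside on the event_reactor trace above: test_start/test_end bracket the run, and each "tick N" line appears to be a timer poller firing, with 100, 250 and 500 presumably the configured poller periods; the relative counts accumulated over one second are what the test checks. The invocation was simply:

  # single core (-c 0x1 in the EAL parameters), one-second run
  ./test/event/reactor/reactor -t 1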
00:04:46.387 [2024-07-16 00:58:02.098318] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4031668 ] 00:04:46.387 EAL: No free 2048 kB hugepages reported on node 1 00:04:46.387 [2024-07-16 00:58:02.156855] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.387 [2024-07-16 00:58:02.264286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.758 test_start 00:04:47.758 test_end 00:04:47.758 Performance: 449190 events per second 00:04:47.758 00:04:47.758 real 0m1.290s 00:04:47.758 user 0m1.210s 00:04:47.758 sys 0m0.075s 00:04:47.758 00:58:03 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:47.758 00:58:03 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:47.758 ************************************ 00:04:47.758 END TEST event_reactor_perf 00:04:47.758 ************************************ 00:04:47.758 00:58:03 event -- common/autotest_common.sh@1142 -- # return 0 00:04:47.758 00:58:03 event -- event/event.sh@49 -- # uname -s 00:04:47.758 00:58:03 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:47.758 00:58:03 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:47.758 00:58:03 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:47.758 00:58:03 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.758 00:58:03 event -- common/autotest_common.sh@10 -- # set +x 00:04:47.758 ************************************ 00:04:47.758 START TEST event_scheduler 00:04:47.758 ************************************ 00:04:47.758 00:58:03 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:47.758 * Looking for test storage... 00:04:47.758 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:47.758 00:58:03 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:47.758 00:58:03 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=4031856 00:04:47.758 00:58:03 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:47.758 00:58:03 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:47.758 00:58:03 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 4031856 00:04:47.758 00:58:03 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 4031856 ']' 00:04:47.758 00:58:03 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:47.758 00:58:03 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:47.759 00:58:03 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:47.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
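Note on the scheduler launch just recorded: -m 0xF starts reactors on cores 0-3, -p 0x2 selects core 2 as the main lcore (it surfaces as --main-lcore=2 in the EAL parameters below), and --wait-for-rpc parks the app before framework initialization so the test can choose a scheduler first. A by-hand equivalent is roughly:

  ./test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
  # the app now listens on /var/tmp/spdk.sock and waits for framework_start_init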
00:04:47.759 00:58:03 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:47.759 00:58:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:47.759 [2024-07-16 00:58:03.522903] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:04:47.759 [2024-07-16 00:58:03.523010] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4031856 ] 00:04:47.759 EAL: No free 2048 kB hugepages reported on node 1 00:04:47.759 [2024-07-16 00:58:03.579788] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:47.759 [2024-07-16 00:58:03.689170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.759 [2024-07-16 00:58:03.689224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:47.759 [2024-07-16 00:58:03.689288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:47.759 [2024-07-16 00:58:03.689291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:47.759 00:58:03 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:47.759 00:58:03 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:04:47.759 00:58:03 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:47.759 00:58:03 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:47.759 00:58:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:47.759 [2024-07-16 00:58:03.734073] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:47.759 [2024-07-16 00:58:03.734102] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:04:47.759 [2024-07-16 00:58:03.734119] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:47.759 [2024-07-16 00:58:03.734130] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:47.759 [2024-07-16 00:58:03.734141] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:47.759 00:58:03 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:47.759 00:58:03 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:47.759 00:58:03 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:47.759 00:58:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:48.047 [2024-07-16 00:58:03.825186] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
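Note: the ERROR/NOTICE pair above is benign here; the dpdk governor declines to initialize because the 0xF mask covers only some SMT siblings of the host, and the dynamic scheduler falls back to its built-in thresholds (load limit 20, core limit 80, core busy 95). The RPC sequence the test issued to get this far is just:

  scripts/rpc.py framework_set_scheduler dynamic
  scripts/rpc.py framework_start_init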
00:04:48.047 00:58:03 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.047 00:58:03 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:48.047 00:58:03 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:48.047 00:58:03 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:48.047 00:58:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:48.047 ************************************ 00:04:48.047 START TEST scheduler_create_thread 00:04:48.047 ************************************ 00:04:48.047 00:58:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:04:48.047 00:58:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:48.047 00:58:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.047 00:58:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.047 2 00:04:48.047 00:58:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.047 00:58:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:48.047 00:58:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.047 00:58:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.047 3 00:04:48.047 00:58:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.047 00:58:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:48.047 00:58:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.047 00:58:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.047 4 00:04:48.047 00:58:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.047 00:58:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:48.048 00:58:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.048 00:58:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.048 5 00:04:48.048 00:58:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.048 00:58:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:48.048 00:58:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.048 00:58:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.048 6 00:04:48.048 00:58:03 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.048 00:58:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:48.048 00:58:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.048 00:58:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.048 7 00:04:48.048 00:58:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.048 00:58:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:48.048 00:58:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.048 00:58:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.048 8 00:04:48.048 00:58:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.048 00:58:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:48.048 00:58:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.048 00:58:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.048 9 00:04:48.048 00:58:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.048 00:58:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:48.048 00:58:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.048 00:58:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.048 10 00:04:48.048 00:58:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.048 00:58:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:48.048 00:58:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.048 00:58:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.048 00:58:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.048 00:58:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:48.048 00:58:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:48.048 00:58:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.048 00:58:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.048 00:58:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.048 00:58:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:48.048 00:58:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.048 00:58:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.048 00:58:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.048 00:58:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:48.048 00:58:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:48.048 00:58:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.048 00:58:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.419 00:58:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:49.419 00:04:49.419 real 0m1.170s 00:04:49.419 user 0m0.009s 00:04:49.419 sys 0m0.004s 00:04:49.419 00:58:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:49.419 00:58:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.419 ************************************ 00:04:49.419 END TEST scheduler_create_thread 00:04:49.419 ************************************ 00:04:49.419 00:58:05 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:04:49.419 00:58:05 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:49.419 00:58:05 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 4031856 00:04:49.419 00:58:05 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 4031856 ']' 00:04:49.419 00:58:05 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 4031856 00:04:49.419 00:58:05 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:04:49.419 00:58:05 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:49.419 00:58:05 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4031856 00:04:49.419 00:58:05 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:04:49.419 00:58:05 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:04:49.419 00:58:05 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4031856' 00:04:49.419 killing process with pid 4031856 00:04:49.419 00:58:05 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 4031856 00:04:49.419 00:58:05 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 4031856 00:04:49.676 [2024-07-16 00:58:05.502888] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
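Note on scheduler_create_thread above: it drives an out-of-tree RPC plugin rather than built-in RPCs, and each scheduler_thread_create call returns a thread id (11 and 12 in this run) that the later calls consume. Stripped of the xtrace noise, and assuming PYTHONPATH is set so the plugin shipped next to the test app is importable, the sequence is approximately:

  scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50   # id 11 from the trace
  scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12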
00:04:49.934 00:04:49.934 real 0m2.324s 00:04:49.934 user 0m2.642s 00:04:49.934 sys 0m0.326s 00:04:49.934 00:58:05 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:49.934 00:58:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:49.934 ************************************ 00:04:49.934 END TEST event_scheduler 00:04:49.934 ************************************ 00:04:49.934 00:58:05 event -- common/autotest_common.sh@1142 -- # return 0 00:04:49.934 00:58:05 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:49.934 00:58:05 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:49.934 00:58:05 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:49.934 00:58:05 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.934 00:58:05 event -- common/autotest_common.sh@10 -- # set +x 00:04:49.934 ************************************ 00:04:49.934 START TEST app_repeat 00:04:49.934 ************************************ 00:04:49.934 00:58:05 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:04:49.934 00:58:05 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.934 00:58:05 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.934 00:58:05 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:49.934 00:58:05 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:49.934 00:58:05 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:49.934 00:58:05 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:49.934 00:58:05 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:49.934 00:58:05 event.app_repeat -- event/event.sh@19 -- # repeat_pid=4032175 00:04:49.934 00:58:05 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:49.934 00:58:05 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:49.934 00:58:05 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 4032175' 00:04:49.934 Process app_repeat pid: 4032175 00:04:49.934 00:58:05 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:49.934 00:58:05 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:49.934 spdk_app_start Round 0 00:04:49.934 00:58:05 event.app_repeat -- event/event.sh@25 -- # waitforlisten 4032175 /var/tmp/spdk-nbd.sock 00:04:49.934 00:58:05 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 4032175 ']' 00:04:49.934 00:58:05 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:49.934 00:58:05 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:49.934 00:58:05 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:49.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:49.934 00:58:05 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:49.934 00:58:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:49.934 [2024-07-16 00:58:05.835813] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 
00:04:49.934 [2024-07-16 00:58:05.835879] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4032175 ] 00:04:49.934 EAL: No free 2048 kB hugepages reported on node 1 00:04:49.934 [2024-07-16 00:58:05.895420] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:50.192 [2024-07-16 00:58:06.001826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:50.192 [2024-07-16 00:58:06.001829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.192 00:58:06 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:50.192 00:58:06 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:50.192 00:58:06 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:50.450 Malloc0 00:04:50.450 00:58:06 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:50.707 Malloc1 00:04:50.707 00:58:06 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:50.707 00:58:06 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:50.707 00:58:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:50.707 00:58:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:50.707 00:58:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:50.707 00:58:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:50.707 00:58:06 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:50.707 00:58:06 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:50.707 00:58:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:50.707 00:58:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:50.707 00:58:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:50.707 00:58:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:50.707 00:58:06 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:50.707 00:58:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:50.707 00:58:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:50.707 00:58:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:50.965 /dev/nbd0 00:04:50.965 00:58:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:50.965 00:58:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:50.965 00:58:06 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:50.965 00:58:06 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:50.965 00:58:06 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:50.965 00:58:06 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:50.965 00:58:06 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:50.965 00:58:06 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:50.965 00:58:06 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:50.965 00:58:06 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:50.965 00:58:06 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:50.965 1+0 records in 00:04:50.965 1+0 records out 00:04:50.965 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000136161 s, 30.1 MB/s 00:04:50.965 00:58:06 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:50.965 00:58:06 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:50.965 00:58:06 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:50.965 00:58:06 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:50.965 00:58:06 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:50.965 00:58:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:50.965 00:58:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:50.965 00:58:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:51.223 /dev/nbd1 00:04:51.223 00:58:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:51.223 00:58:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:51.223 00:58:07 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:51.223 00:58:07 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:51.223 00:58:07 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:51.223 00:58:07 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:51.223 00:58:07 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:51.223 00:58:07 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:51.223 00:58:07 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:51.223 00:58:07 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:51.223 00:58:07 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:51.223 1+0 records in 00:04:51.223 1+0 records out 00:04:51.223 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000181498 s, 22.6 MB/s 00:04:51.223 00:58:07 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:51.223 00:58:07 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:51.223 00:58:07 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:51.223 00:58:07 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:51.223 00:58:07 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:51.223 00:58:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:51.223 00:58:07 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:51.223 00:58:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:51.223 00:58:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.224 00:58:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:51.482 00:58:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:51.482 { 00:04:51.482 "nbd_device": "/dev/nbd0", 00:04:51.482 "bdev_name": "Malloc0" 00:04:51.482 }, 00:04:51.482 { 00:04:51.482 "nbd_device": "/dev/nbd1", 00:04:51.482 "bdev_name": "Malloc1" 00:04:51.482 } 00:04:51.482 ]' 00:04:51.482 00:58:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:51.482 { 00:04:51.482 "nbd_device": "/dev/nbd0", 00:04:51.482 "bdev_name": "Malloc0" 00:04:51.482 }, 00:04:51.482 { 00:04:51.482 "nbd_device": "/dev/nbd1", 00:04:51.482 "bdev_name": "Malloc1" 00:04:51.482 } 00:04:51.482 ]' 00:04:51.482 00:58:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:51.482 00:58:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:51.482 /dev/nbd1' 00:04:51.482 00:58:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:51.482 /dev/nbd1' 00:04:51.482 00:58:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:51.740 00:58:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:51.740 00:58:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:51.740 00:58:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:51.740 00:58:07 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:51.740 00:58:07 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:51.740 00:58:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.740 00:58:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:51.740 00:58:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:51.740 00:58:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:51.740 00:58:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:51.740 00:58:07 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:51.740 256+0 records in 00:04:51.740 256+0 records out 00:04:51.740 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0050128 s, 209 MB/s 00:04:51.740 00:58:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:51.740 00:58:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:51.740 256+0 records in 00:04:51.740 256+0 records out 00:04:51.740 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0209557 s, 50.0 MB/s 00:04:51.740 00:58:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:51.740 00:58:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:51.740 256+0 records in 00:04:51.740 256+0 records out 00:04:51.740 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0224947 s, 46.6 MB/s 00:04:51.740 00:58:07 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:51.740 00:58:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.740 00:58:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:51.740 00:58:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:51.740 00:58:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:51.740 00:58:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:51.740 00:58:07 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:51.740 00:58:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:51.740 00:58:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:51.740 00:58:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:51.740 00:58:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:51.740 00:58:07 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:51.740 00:58:07 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:51.740 00:58:07 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.740 00:58:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.740 00:58:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:51.740 00:58:07 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:51.740 00:58:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:51.740 00:58:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:51.998 00:58:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:51.998 00:58:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:51.998 00:58:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:51.998 00:58:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:51.998 00:58:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:51.998 00:58:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:51.998 00:58:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:51.998 00:58:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:51.998 00:58:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:51.998 00:58:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:52.257 00:58:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:52.257 00:58:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:52.257 00:58:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:52.257 00:58:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:52.257 00:58:08 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:52.257 00:58:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:52.257 00:58:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:52.257 00:58:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:52.257 00:58:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:52.257 00:58:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.257 00:58:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:52.515 00:58:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:52.515 00:58:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:52.515 00:58:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:52.515 00:58:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:52.515 00:58:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:52.515 00:58:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:52.515 00:58:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:52.515 00:58:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:52.515 00:58:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:52.515 00:58:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:52.515 00:58:08 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:52.515 00:58:08 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:52.515 00:58:08 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:52.773 00:58:08 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:53.032 [2024-07-16 00:58:08.892692] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:53.032 [2024-07-16 00:58:08.991305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.032 [2024-07-16 00:58:08.991305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:53.290 [2024-07-16 00:58:09.041115] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:53.290 [2024-07-16 00:58:09.041179] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:55.818 00:58:11 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:55.818 00:58:11 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:55.818 spdk_app_start Round 1 00:04:55.818 00:58:11 event.app_repeat -- event/event.sh@25 -- # waitforlisten 4032175 /var/tmp/spdk-nbd.sock 00:04:55.818 00:58:11 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 4032175 ']' 00:04:55.818 00:58:11 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:55.818 00:58:11 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:55.818 00:58:11 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:55.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
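The dd and cmp records in the round above come from nbd_common.sh's write/verify pass. Reduced to its essentials, and with the temp path swapped for a hypothetical /tmp location, the pattern is:

    #!/usr/bin/env bash
    # Hypothetical paths; the real test writes under the repo's test/event dir.
    tmp_file=/tmp/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)

    # Seed 1 MiB of random data, then copy it to every exported nbd device
    # with O_DIRECT so the writes bypass the page cache.
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for nbd in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct
    done

    # Read back and byte-compare the first 1 MiB of each device against the
    # seed file; cmp exits non-zero (failing the test) on the first mismatch.
    for nbd in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$nbd"
    done
    rm "$tmp_file"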
00:04:55.818 00:58:11 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:55.818 00:58:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:56.076 00:58:11 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:56.076 00:58:11 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:56.076 00:58:11 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:56.334 Malloc0 00:04:56.334 00:58:12 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:56.592 Malloc1 00:04:56.592 00:58:12 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:56.592 00:58:12 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.592 00:58:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:56.592 00:58:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:56.592 00:58:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.592 00:58:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:56.592 00:58:12 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:56.592 00:58:12 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.592 00:58:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:56.592 00:58:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:56.592 00:58:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.592 00:58:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:56.592 00:58:12 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:56.592 00:58:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:56.592 00:58:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:56.592 00:58:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:56.850 /dev/nbd0 00:04:56.850 00:58:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:56.850 00:58:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:56.850 00:58:12 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:56.850 00:58:12 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:56.850 00:58:12 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:56.850 00:58:12 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:56.850 00:58:12 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:56.850 00:58:12 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:56.850 00:58:12 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:56.850 00:58:12 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:56.850 00:58:12 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:56.850 1+0 records in 00:04:56.850 1+0 records out 00:04:56.850 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000153503 s, 26.7 MB/s 00:04:56.850 00:58:12 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:56.850 00:58:12 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:56.850 00:58:12 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:56.850 00:58:12 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:56.850 00:58:12 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:56.850 00:58:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:56.850 00:58:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:56.850 00:58:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:57.108 /dev/nbd1 00:04:57.108 00:58:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:57.108 00:58:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:57.108 00:58:12 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:57.108 00:58:12 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:57.108 00:58:12 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:57.108 00:58:12 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:57.108 00:58:12 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:57.108 00:58:12 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:57.108 00:58:12 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:57.108 00:58:12 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:57.108 00:58:12 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:57.108 1+0 records in 00:04:57.108 1+0 records out 00:04:57.108 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000245521 s, 16.7 MB/s 00:04:57.108 00:58:12 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:57.108 00:58:12 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:57.109 00:58:12 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:57.109 00:58:12 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:57.109 00:58:12 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:57.109 00:58:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:57.109 00:58:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:57.109 00:58:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:57.109 00:58:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.109 00:58:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:57.366 00:58:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:57.366 { 00:04:57.366 "nbd_device": "/dev/nbd0", 00:04:57.366 "bdev_name": "Malloc0" 00:04:57.366 }, 00:04:57.366 { 00:04:57.366 "nbd_device": "/dev/nbd1", 00:04:57.366 "bdev_name": "Malloc1" 00:04:57.366 } 00:04:57.366 ]' 00:04:57.366 00:58:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:57.366 { 00:04:57.366 "nbd_device": "/dev/nbd0", 00:04:57.366 "bdev_name": "Malloc0" 00:04:57.366 }, 00:04:57.366 { 00:04:57.366 "nbd_device": "/dev/nbd1", 00:04:57.366 "bdev_name": "Malloc1" 00:04:57.366 } 00:04:57.366 ]' 00:04:57.366 00:58:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:57.366 00:58:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:57.366 /dev/nbd1' 00:04:57.366 00:58:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:57.366 /dev/nbd1' 00:04:57.366 00:58:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:57.366 00:58:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:57.366 00:58:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:57.366 00:58:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:57.366 00:58:13 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:57.366 00:58:13 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:57.366 00:58:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.366 00:58:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:57.366 00:58:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:57.367 00:58:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:57.367 00:58:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:57.367 00:58:13 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:57.367 256+0 records in 00:04:57.367 256+0 records out 00:04:57.367 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00513734 s, 204 MB/s 00:04:57.367 00:58:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:57.367 00:58:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:57.367 256+0 records in 00:04:57.367 256+0 records out 00:04:57.367 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0205605 s, 51.0 MB/s 00:04:57.367 00:58:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:57.367 00:58:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:57.367 256+0 records in 00:04:57.367 256+0 records out 00:04:57.367 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0224916 s, 46.6 MB/s 00:04:57.367 00:58:13 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:57.367 00:58:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.367 00:58:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:57.367 00:58:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:57.367 00:58:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:57.367 00:58:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:57.367 00:58:13 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:57.367 00:58:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:57.367 00:58:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:57.367 00:58:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:57.367 00:58:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:57.367 00:58:13 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:57.367 00:58:13 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:57.367 00:58:13 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.367 00:58:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.367 00:58:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:57.367 00:58:13 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:57.367 00:58:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:57.367 00:58:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:57.625 00:58:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:57.625 00:58:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:57.625 00:58:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:57.625 00:58:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:57.625 00:58:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:57.625 00:58:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:57.625 00:58:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:57.625 00:58:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:57.625 00:58:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:57.625 00:58:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:57.883 00:58:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:57.883 00:58:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:57.883 00:58:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:57.883 00:58:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:57.883 00:58:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:57.883 00:58:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:57.883 00:58:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:57.883 00:58:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:58.140 00:58:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:58.140 00:58:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.140 00:58:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:58.140 00:58:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:58.140 00:58:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:58.140 00:58:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:58.396 00:58:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:58.396 00:58:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:58.396 00:58:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:58.396 00:58:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:58.396 00:58:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:58.396 00:58:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:58.396 00:58:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:58.396 00:58:14 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:58.396 00:58:14 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:58.396 00:58:14 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:58.653 00:58:14 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:58.910 [2024-07-16 00:58:14.689042] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:58.910 [2024-07-16 00:58:14.789589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:58.910 [2024-07-16 00:58:14.789593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.910 [2024-07-16 00:58:14.847435] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:58.910 [2024-07-16 00:58:14.847511] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:02.188 00:58:17 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:02.188 00:58:17 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:02.188 spdk_app_start Round 2 00:05:02.188 00:58:17 event.app_repeat -- event/event.sh@25 -- # waitforlisten 4032175 /var/tmp/spdk-nbd.sock 00:05:02.188 00:58:17 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 4032175 ']' 00:05:02.188 00:58:17 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:02.188 00:58:17 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:02.188 00:58:17 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:02.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
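Each app_repeat round drives the same RPC sequence through scripts/rpc.py against the -s /var/tmp/spdk-nbd.sock socket, as the xtrace shows. A standalone sketch of that flow, with the bdev names as printed by the create calls:

    #!/usr/bin/env bash
    RPC=./scripts/rpc.py          # path assumes the SPDK source tree as cwd
    SOCK=/var/tmp/spdk-nbd.sock   # same -r socket app_repeat was started with

    # Create two 64 MB malloc bdevs with a 4096-byte block size and export
    # each as a kernel nbd device (modprobe nbd must have succeeded first).
    $RPC -s $SOCK bdev_malloc_create 64 4096   # prints the bdev name, e.g. Malloc0
    $RPC -s $SOCK bdev_malloc_create 64 4096   # e.g. Malloc1
    $RPC -s $SOCK nbd_start_disk Malloc0 /dev/nbd0
    $RPC -s $SOCK nbd_start_disk Malloc1 /dev/nbd1

    # The JSON from nbd_get_disks is what the trace pipes through
    # `jq -r '.[] | .nbd_device'` and `grep -c /dev/nbd` to count devices.
    $RPC -s $SOCK nbd_get_disks

    # Tear down; an empty list from nbd_get_disks then yields count=0.
    $RPC -s $SOCK nbd_stop_disk /dev/nbd0
    $RPC -s $SOCK nbd_stop_disk /dev/nbd1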
00:05:02.188 00:58:17 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:02.188 00:58:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:02.188 00:58:17 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:02.188 00:58:17 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:02.188 00:58:17 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:02.188 Malloc0 00:05:02.188 00:58:17 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:02.445 Malloc1 00:05:02.445 00:58:18 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:02.445 00:58:18 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:02.445 00:58:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:02.445 00:58:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:02.445 00:58:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:02.445 00:58:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:02.445 00:58:18 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:02.445 00:58:18 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:02.445 00:58:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:02.445 00:58:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:02.445 00:58:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:02.445 00:58:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:02.445 00:58:18 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:02.445 00:58:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:02.445 00:58:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:02.445 00:58:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:02.702 /dev/nbd0 00:05:02.702 00:58:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:02.702 00:58:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:02.702 00:58:18 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:02.702 00:58:18 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:02.702 00:58:18 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:02.702 00:58:18 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:02.702 00:58:18 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:02.702 00:58:18 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:02.702 00:58:18 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:02.702 00:58:18 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:02.702 00:58:18 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:02.702 1+0 records in 00:05:02.702 1+0 records out 00:05:02.702 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000198241 s, 20.7 MB/s 00:05:02.702 00:58:18 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:02.702 00:58:18 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:02.702 00:58:18 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:02.702 00:58:18 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:02.702 00:58:18 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:02.702 00:58:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:02.702 00:58:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:02.702 00:58:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:03.004 /dev/nbd1 00:05:03.004 00:58:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:03.004 00:58:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:03.004 00:58:18 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:03.004 00:58:18 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:03.004 00:58:18 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:03.004 00:58:18 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:03.004 00:58:18 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:03.004 00:58:18 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:03.004 00:58:18 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:03.004 00:58:18 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:03.004 00:58:18 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:03.004 1+0 records in 00:05:03.004 1+0 records out 00:05:03.004 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00018679 s, 21.9 MB/s 00:05:03.004 00:58:18 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:03.004 00:58:18 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:03.004 00:58:18 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:03.004 00:58:18 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:03.004 00:58:18 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:03.004 00:58:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:03.004 00:58:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:03.004 00:58:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:03.004 00:58:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.004 00:58:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:03.262 00:58:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:03.262 { 00:05:03.262 "nbd_device": "/dev/nbd0", 00:05:03.262 "bdev_name": "Malloc0" 00:05:03.262 }, 00:05:03.262 { 00:05:03.262 "nbd_device": "/dev/nbd1", 00:05:03.262 "bdev_name": "Malloc1" 00:05:03.262 } 00:05:03.262 ]' 00:05:03.262 00:58:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:03.262 { 00:05:03.262 "nbd_device": "/dev/nbd0", 00:05:03.262 "bdev_name": "Malloc0" 00:05:03.262 }, 00:05:03.262 { 00:05:03.262 "nbd_device": "/dev/nbd1", 00:05:03.262 "bdev_name": "Malloc1" 00:05:03.262 } 00:05:03.262 ]' 00:05:03.262 00:58:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:03.262 00:58:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:03.262 /dev/nbd1' 00:05:03.262 00:58:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:03.262 /dev/nbd1' 00:05:03.262 00:58:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:03.262 00:58:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:03.262 00:58:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:03.262 00:58:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:03.262 00:58:19 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:03.262 00:58:19 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:03.262 00:58:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.262 00:58:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:03.262 00:58:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:03.262 00:58:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:03.262 00:58:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:03.262 00:58:19 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:03.262 256+0 records in 00:05:03.262 256+0 records out 00:05:03.262 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00385284 s, 272 MB/s 00:05:03.262 00:58:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:03.262 00:58:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:03.262 256+0 records in 00:05:03.262 256+0 records out 00:05:03.262 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0210614 s, 49.8 MB/s 00:05:03.262 00:58:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:03.262 00:58:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:03.262 256+0 records in 00:05:03.262 256+0 records out 00:05:03.262 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0226964 s, 46.2 MB/s 00:05:03.262 00:58:19 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:03.262 00:58:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.262 00:58:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:03.262 00:58:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:03.262 00:58:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:03.262 00:58:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:03.262 00:58:19 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:03.262 00:58:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:03.262 00:58:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:03.262 00:58:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:03.262 00:58:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:03.262 00:58:19 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:03.262 00:58:19 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:03.262 00:58:19 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.262 00:58:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.262 00:58:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:03.262 00:58:19 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:03.262 00:58:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:03.262 00:58:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:03.519 00:58:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:03.519 00:58:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:03.519 00:58:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:03.519 00:58:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:03.519 00:58:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:03.519 00:58:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:03.519 00:58:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:03.519 00:58:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:03.519 00:58:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:03.519 00:58:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:03.776 00:58:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:03.776 00:58:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:03.776 00:58:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:03.776 00:58:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:03.776 00:58:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:03.776 00:58:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:03.776 00:58:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:03.776 00:58:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:03.776 00:58:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:03.776 00:58:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.776 00:58:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:04.034 00:58:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:04.034 00:58:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:04.034 00:58:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:04.034 00:58:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:04.034 00:58:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:04.034 00:58:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:04.034 00:58:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:04.034 00:58:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:04.034 00:58:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:04.034 00:58:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:04.034 00:58:19 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:04.034 00:58:19 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:04.034 00:58:19 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:04.291 00:58:20 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:04.548 [2024-07-16 00:58:20.482717] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:04.806 [2024-07-16 00:58:20.585683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:04.806 [2024-07-16 00:58:20.585683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.806 [2024-07-16 00:58:20.638658] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:04.806 [2024-07-16 00:58:20.638723] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:07.333 00:58:23 event.app_repeat -- event/event.sh@38 -- # waitforlisten 4032175 /var/tmp/spdk-nbd.sock 00:05:07.333 00:58:23 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 4032175 ']' 00:05:07.333 00:58:23 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:07.333 00:58:23 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:07.333 00:58:23 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:07.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:07.333 00:58:23 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:07.333 00:58:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:07.655 00:58:23 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:07.655 00:58:23 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:07.655 00:58:23 event.app_repeat -- event/event.sh@39 -- # killprocess 4032175 00:05:07.655 00:58:23 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 4032175 ']' 00:05:07.655 00:58:23 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 4032175 00:05:07.655 00:58:23 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:05:07.655 00:58:23 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:07.655 00:58:23 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4032175 00:05:07.655 00:58:23 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:07.655 00:58:23 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:07.655 00:58:23 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4032175' 00:05:07.655 killing process with pid 4032175 00:05:07.655 00:58:23 event.app_repeat -- common/autotest_common.sh@967 -- # kill 4032175 00:05:07.655 00:58:23 event.app_repeat -- common/autotest_common.sh@972 -- # wait 4032175 00:05:07.914 spdk_app_start is called in Round 0. 00:05:07.914 Shutdown signal received, stop current app iteration 00:05:07.914 Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 reinitialization... 00:05:07.914 spdk_app_start is called in Round 1. 00:05:07.914 Shutdown signal received, stop current app iteration 00:05:07.914 Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 reinitialization... 00:05:07.914 spdk_app_start is called in Round 2. 00:05:07.914 Shutdown signal received, stop current app iteration 00:05:07.914 Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 reinitialization... 00:05:07.914 spdk_app_start is called in Round 3. 
00:05:07.914 Shutdown signal received, stop current app iteration 00:05:07.914 00:58:23 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:07.914 00:58:23 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:07.914 00:05:07.914 real 0m17.914s 00:05:07.914 user 0m38.852s 00:05:07.914 sys 0m3.227s 00:05:07.914 00:58:23 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:07.914 00:58:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:07.914 ************************************ 00:05:07.914 END TEST app_repeat 00:05:07.914 ************************************ 00:05:07.914 00:58:23 event -- common/autotest_common.sh@1142 -- # return 0 00:05:07.914 00:58:23 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:07.914 00:58:23 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:07.914 00:58:23 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:07.914 00:58:23 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.914 00:58:23 event -- common/autotest_common.sh@10 -- # set +x 00:05:07.914 ************************************ 00:05:07.914 START TEST cpu_locks 00:05:07.914 ************************************ 00:05:07.914 00:58:23 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:07.914 * Looking for test storage... 00:05:07.914 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:07.914 00:58:23 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:07.914 00:58:23 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:07.914 00:58:23 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:07.914 00:58:23 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:07.914 00:58:23 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:07.914 00:58:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.914 00:58:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:07.914 ************************************ 00:05:07.914 START TEST default_locks 00:05:07.914 ************************************ 00:05:07.914 00:58:23 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:05:07.914 00:58:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=4034527 00:05:07.914 00:58:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:07.914 00:58:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 4034527 00:05:07.914 00:58:23 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 4034527 ']' 00:05:07.914 00:58:23 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.914 00:58:23 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:07.914 00:58:23 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
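The default_locks test that starts here asserts that spdk_tgt holds its per-core file lock, and, as the trace below shows, it verifies this with lslocks; the "lslocks: write error" it prints is the benign EPIPE from grep -q closing the pipe after the first match. A minimal sketch of that check, assuming the lock file keeps the spdk_cpu_lock prefix seen in the trace:

    #!/usr/bin/env bash
    # locks_exist, roughly as exercised below: spdk_tgt flocks a per-core
    # file named spdk_cpu_lock*, and the test asserts the lock is held.
    locks_exist() {
        local pid=$1
        # grep -q exits on the first match, so lslocks may report a benign
        # "write error" (EPIPE), exactly as seen in the log below.
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }
    locks_exist 4034527 && echo "CPU core lock is held"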
00:05:07.914 00:58:23 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:07.914 00:58:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:08.172 [2024-07-16 00:58:23.914121] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:05:08.172 [2024-07-16 00:58:23.914215] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4034527 ] 00:05:08.172 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.172 [2024-07-16 00:58:23.970573] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.172 [2024-07-16 00:58:24.079598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.429 00:58:24 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:08.429 00:58:24 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:05:08.429 00:58:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 4034527 00:05:08.429 00:58:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 4034527 00:05:08.429 00:58:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:08.687 lslocks: write error 00:05:08.687 00:58:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 4034527 00:05:08.687 00:58:24 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 4034527 ']' 00:05:08.687 00:58:24 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 4034527 00:05:08.687 00:58:24 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:05:08.687 00:58:24 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:08.687 00:58:24 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4034527 00:05:08.687 00:58:24 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:08.687 00:58:24 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:08.687 00:58:24 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4034527' 00:05:08.687 killing process with pid 4034527 00:05:08.687 00:58:24 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 4034527 00:05:08.687 00:58:24 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 4034527 00:05:09.250 00:58:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 4034527 00:05:09.250 00:58:24 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:09.250 00:58:24 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 4034527 00:05:09.250 00:58:24 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:09.250 00:58:24 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:09.250 00:58:24 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:09.250 00:58:24 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:09.250 00:58:24 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 4034527 00:05:09.250 00:58:24 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 4034527 ']' 00:05:09.250 00:58:24 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.250 00:58:24 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:09.250 00:58:24 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.250 00:58:24 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:09.250 00:58:24 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:09.250 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (4034527) - No such process 00:05:09.250 ERROR: process (pid: 4034527) is no longer running 00:05:09.250 00:58:24 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:09.250 00:58:24 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:05:09.250 00:58:24 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:09.250 00:58:24 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:09.250 00:58:24 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:09.250 00:58:24 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:09.250 00:58:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:09.250 00:58:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:09.250 00:58:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:09.250 00:58:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:09.250 00:05:09.250 real 0m1.109s 00:05:09.250 user 0m1.050s 00:05:09.250 sys 0m0.499s 00:05:09.250 00:58:24 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:09.250 00:58:24 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:09.250 ************************************ 00:05:09.250 END TEST default_locks 00:05:09.250 ************************************ 00:05:09.250 00:58:24 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:09.250 00:58:24 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:09.250 00:58:24 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:09.250 00:58:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.250 00:58:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:09.250 ************************************ 00:05:09.250 START TEST default_locks_via_rpc 00:05:09.250 ************************************ 00:05:09.250 00:58:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:05:09.250 00:58:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=4034689 00:05:09.250 00:58:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:09.250 00:58:25 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 4034689 00:05:09.250 00:58:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 4034689 ']' 00:05:09.250 00:58:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.250 00:58:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:09.250 00:58:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.250 00:58:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:09.250 00:58:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.250 [2024-07-16 00:58:25.074564] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:05:09.250 [2024-07-16 00:58:25.074652] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4034689 ] 00:05:09.250 EAL: No free 2048 kB hugepages reported on node 1 00:05:09.250 [2024-07-16 00:58:25.131529] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.250 [2024-07-16 00:58:25.240158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.508 00:58:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:09.508 00:58:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:09.508 00:58:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:09.508 00:58:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.508 00:58:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.508 00:58:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.508 00:58:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:09.508 00:58:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:09.508 00:58:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:09.508 00:58:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:09.508 00:58:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:09.508 00:58:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.508 00:58:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.508 00:58:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.508 00:58:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 4034689 00:05:09.508 00:58:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 4034689 00:05:09.508 00:58:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:10.073 
00:58:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 4034689 00:05:10.073 00:58:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 4034689 ']' 00:05:10.073 00:58:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 4034689 00:05:10.073 00:58:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:05:10.073 00:58:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:10.073 00:58:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4034689 00:05:10.073 00:58:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:10.073 00:58:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:10.073 00:58:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4034689' 00:05:10.073 killing process with pid 4034689 00:05:10.073 00:58:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 4034689 00:05:10.073 00:58:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 4034689 00:05:10.331 00:05:10.331 real 0m1.220s 00:05:10.331 user 0m1.182s 00:05:10.331 sys 0m0.488s 00:05:10.331 00:58:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.331 00:58:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.331 ************************************ 00:05:10.331 END TEST default_locks_via_rpc 00:05:10.331 ************************************ 00:05:10.331 00:58:26 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:10.331 00:58:26 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:10.331 00:58:26 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:10.331 00:58:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.331 00:58:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:10.331 ************************************ 00:05:10.331 START TEST non_locking_app_on_locked_coremask 00:05:10.331 ************************************ 00:05:10.331 00:58:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:05:10.331 00:58:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=4034888 00:05:10.331 00:58:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 4034888 /var/tmp/spdk.sock 00:05:10.331 00:58:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:10.331 00:58:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 4034888 ']' 00:05:10.331 00:58:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.331 00:58:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:10.331 00:58:26 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.331 00:58:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:10.331 00:58:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:10.588 [2024-07-16 00:58:26.344772] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:05:10.589 [2024-07-16 00:58:26.344865] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4034888 ] 00:05:10.589 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.589 [2024-07-16 00:58:26.403616] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.589 [2024-07-16 00:58:26.503840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.846 00:58:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:10.846 00:58:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:10.846 00:58:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=4034980 00:05:10.846 00:58:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:10.846 00:58:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 4034980 /var/tmp/spdk2.sock 00:05:10.846 00:58:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 4034980 ']' 00:05:10.846 00:58:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:10.846 00:58:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:10.846 00:58:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:10.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:10.846 00:58:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:10.846 00:58:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:10.846 [2024-07-16 00:58:26.796518] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:05:10.846 [2024-07-16 00:58:26.796613] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4034980 ] 00:05:10.846 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.104 [2024-07-16 00:58:26.878198] app.c: 911:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
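The second target above can share core 0 with the first only because it was launched with --disable-cpumask-locks; without that flag its claim on core 0 would be refused. The scenario in two lines (the relative binary path and backgrounding are assumptions; the flags and socket paths match the trace):

    ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk.sock &   # claims the core-0 lock
    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    # the second instance logs "CPU core locks deactivated." and starts cleanly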
00:05:11.104 [2024-07-16 00:58:26.878252] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.104 [2024-07-16 00:58:27.093702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.035 00:58:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:12.035 00:58:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:12.035 00:58:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 4034888 00:05:12.035 00:58:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 4034888 00:05:12.035 00:58:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:12.291 lslocks: write error 00:05:12.291 00:58:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 4034888 00:05:12.291 00:58:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 4034888 ']' 00:05:12.291 00:58:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 4034888 00:05:12.291 00:58:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:12.291 00:58:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:12.291 00:58:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4034888 00:05:12.291 00:58:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:12.291 00:58:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:12.291 00:58:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4034888' 00:05:12.291 killing process with pid 4034888 00:05:12.291 00:58:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 4034888 00:05:12.291 00:58:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 4034888 00:05:13.220 00:58:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 4034980 00:05:13.220 00:58:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 4034980 ']' 00:05:13.220 00:58:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 4034980 00:05:13.220 00:58:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:13.220 00:58:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:13.220 00:58:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4034980 00:05:13.220 00:58:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:13.220 00:58:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:13.220 00:58:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4034980' 00:05:13.220 
killing process with pid 4034980 00:05:13.220 00:58:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 4034980 00:05:13.220 00:58:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 4034980 00:05:13.511 00:05:13.511 real 0m3.151s 00:05:13.511 user 0m3.320s 00:05:13.511 sys 0m0.989s 00:05:13.511 00:58:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:13.511 00:58:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:13.511 ************************************ 00:05:13.511 END TEST non_locking_app_on_locked_coremask 00:05:13.511 ************************************ 00:05:13.512 00:58:29 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:13.512 00:58:29 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:13.512 00:58:29 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:13.512 00:58:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:13.512 00:58:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:13.512 ************************************ 00:05:13.512 START TEST locking_app_on_unlocked_coremask 00:05:13.512 ************************************ 00:05:13.512 00:58:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:05:13.512 00:58:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=4035285 00:05:13.512 00:58:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:13.512 00:58:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 4035285 /var/tmp/spdk.sock 00:05:13.512 00:58:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 4035285 ']' 00:05:13.512 00:58:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.512 00:58:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:13.512 00:58:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:13.512 00:58:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:13.512 00:58:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:13.769 [2024-07-16 00:58:29.549571] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 
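Each of these tests decides pass/fail with the same probe, traced at event/cpu_locks.sh@22: list the file locks held by the target's pid and grep for the per-core lock files. Reconstructed from the traced pipeline (the surrounding plumbing is assumed):

    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock   # true if the pid holds a core lock
    }

The stray "lslocks: write error" lines are almost certainly lslocks hitting a closed pipe after grep -q exits on its first match; they do not affect the probe's result.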
00:05:13.769 [2024-07-16 00:58:29.549666] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4035285 ] 00:05:13.769 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.769 [2024-07-16 00:58:29.606977] app.c: 911:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:13.769 [2024-07-16 00:58:29.607007] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.769 [2024-07-16 00:58:29.708554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.026 00:58:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:14.026 00:58:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:14.026 00:58:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=4035410 00:05:14.026 00:58:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:14.026 00:58:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 4035410 /var/tmp/spdk2.sock 00:05:14.026 00:58:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 4035410 ']' 00:05:14.026 00:58:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:14.026 00:58:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:14.026 00:58:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:14.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:14.026 00:58:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:14.026 00:58:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:14.026 [2024-07-16 00:58:30.006554] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 
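Underneath all of this, "claiming" core N amounts to holding an exclusive lock on /var/tmp/spdk_cpu_lock_NNN, the files the suite later enumerates in check_remaining_locks. A shell rendering of the mechanism, not SPDK's actual C code path (app.c:claim_cpu_cores in the traces):

    core=000
    exec 9>"/var/tmp/spdk_cpu_lock_${core}"   # create/open the per-core lock file
    if ! flock -n 9; then                     # non-blocking exclusive lock on fd 9
        echo "Cannot create lock on core ${core}: already claimed" >&2
    fi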
00:05:14.026 [2024-07-16 00:58:30.006691] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4035410 ] 00:05:14.282 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.282 [2024-07-16 00:58:30.093044] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.539 [2024-07-16 00:58:30.301166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.119 00:58:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:15.119 00:58:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:15.119 00:58:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 4035410 00:05:15.119 00:58:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 4035410 00:05:15.119 00:58:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:15.684 lslocks: write error 00:05:15.684 00:58:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 4035285 00:05:15.684 00:58:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 4035285 ']' 00:05:15.684 00:58:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 4035285 00:05:15.684 00:58:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:15.684 00:58:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:15.684 00:58:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4035285 00:05:15.684 00:58:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:15.684 00:58:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:15.684 00:58:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4035285' 00:05:15.684 killing process with pid 4035285 00:05:15.684 00:58:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 4035285 00:05:15.684 00:58:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 4035285 00:05:16.617 00:58:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 4035410 00:05:16.617 00:58:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 4035410 ']' 00:05:16.617 00:58:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 4035410 00:05:16.617 00:58:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:16.617 00:58:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:16.617 00:58:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4035410 00:05:16.617 00:58:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:05:16.617 00:58:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:16.617 00:58:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4035410' 00:05:16.617 killing process with pid 4035410 00:05:16.617 00:58:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 4035410 00:05:16.617 00:58:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 4035410 00:05:16.875 00:05:16.875 real 0m3.258s 00:05:16.875 user 0m3.430s 00:05:16.875 sys 0m1.039s 00:05:16.875 00:58:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:16.875 00:58:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:16.875 ************************************ 00:05:16.875 END TEST locking_app_on_unlocked_coremask 00:05:16.876 ************************************ 00:05:16.876 00:58:32 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:16.876 00:58:32 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:16.876 00:58:32 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:16.876 00:58:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.876 00:58:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:16.876 ************************************ 00:05:16.876 START TEST locking_app_on_locked_coremask 00:05:16.876 ************************************ 00:05:16.876 00:58:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:05:16.876 00:58:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=4035725 00:05:16.876 00:58:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:16.876 00:58:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 4035725 /var/tmp/spdk.sock 00:05:16.876 00:58:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 4035725 ']' 00:05:16.876 00:58:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.876 00:58:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:16.876 00:58:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:16.876 00:58:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:16.876 00:58:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:16.876 [2024-07-16 00:58:32.862966] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 
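The killprocess sequence traced above (autotest_common.sh@948-972) reduces to: reject empty or dead pids, refuse to kill a sudo wrapper, then kill and reap. Condensed sketch; the real helper carries extra branches (for one, a non-Linux path behind that uname check) not shown here:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                    # the '[' -z ... ']' guard
        kill -0 "$pid" 2>/dev/null || return 1       # process must still exist
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [ "$process_name" = sudo ] && return 1   # never kill the sudo wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }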
00:05:16.876 [2024-07-16 00:58:32.863064] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4035725 ] 00:05:17.134 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.134 [2024-07-16 00:58:32.920868] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.134 [2024-07-16 00:58:33.030224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.391 00:58:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:17.391 00:58:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:17.391 00:58:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=4035843 00:05:17.391 00:58:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:17.391 00:58:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 4035843 /var/tmp/spdk2.sock 00:05:17.391 00:58:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:17.391 00:58:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 4035843 /var/tmp/spdk2.sock 00:05:17.391 00:58:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:17.391 00:58:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:17.391 00:58:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:17.391 00:58:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:17.391 00:58:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 4035843 /var/tmp/spdk2.sock 00:05:17.391 00:58:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 4035843 ']' 00:05:17.391 00:58:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:17.391 00:58:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:17.391 00:58:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:17.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:17.391 00:58:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:17.391 00:58:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:17.391 [2024-07-16 00:58:33.321702] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 
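locking_app_on_locked_coremask expects the second start-up to fail, so it wraps waitforlisten in the NOT helper traced above (autotest_common.sh@648-675). The essence, with the traced signal-exit bookkeeping (es > 128) omitted:

    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))   # NOT succeeds exactly when the wrapped command failed
    }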
00:05:17.391 [2024-07-16 00:58:33.321777] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4035843 ] 00:05:17.391 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.648 [2024-07-16 00:58:33.405701] app.c: 776:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 4035725 has claimed it. 00:05:17.648 [2024-07-16 00:58:33.405751] app.c: 907:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:18.213 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (4035843) - No such process 00:05:18.213 ERROR: process (pid: 4035843) is no longer running 00:05:18.213 00:58:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:18.213 00:58:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:18.213 00:58:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:18.213 00:58:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:18.213 00:58:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:18.213 00:58:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:18.213 00:58:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 4035725 00:05:18.213 00:58:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 4035725 00:05:18.213 00:58:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:18.471 lslocks: write error 00:05:18.471 00:58:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 4035725 00:05:18.471 00:58:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 4035725 ']' 00:05:18.471 00:58:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 4035725 00:05:18.471 00:58:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:18.471 00:58:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:18.471 00:58:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4035725 00:05:18.471 00:58:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:18.471 00:58:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:18.471 00:58:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4035725' 00:05:18.471 killing process with pid 4035725 00:05:18.471 00:58:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 4035725 00:05:18.471 00:58:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 4035725 00:05:19.037 00:05:19.037 real 0m1.975s 00:05:19.037 user 0m2.143s 00:05:19.037 sys 0m0.611s 00:05:19.037 00:58:34 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:19.037 00:58:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:19.037 ************************************ 00:05:19.037 END TEST locking_app_on_locked_coremask 00:05:19.037 ************************************ 00:05:19.037 00:58:34 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:19.037 00:58:34 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:19.037 00:58:34 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:19.037 00:58:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.037 00:58:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:19.037 ************************************ 00:05:19.037 START TEST locking_overlapped_coremask 00:05:19.037 ************************************ 00:05:19.037 00:58:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:05:19.037 00:58:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=4036022 00:05:19.037 00:58:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:19.037 00:58:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 4036022 /var/tmp/spdk.sock 00:05:19.037 00:58:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 4036022 ']' 00:05:19.037 00:58:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.037 00:58:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:19.037 00:58:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.037 00:58:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:19.038 00:58:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:19.038 [2024-07-16 00:58:34.885479] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 
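locking_overlapped_coremask, starting next, runs the primary with -m 0x7 (cores 0-2) and a secondary with -m 0x1c (cores 2-4); the single-core intersection is what the later "Cannot create lock on core 2" error refers to:

    first=0x7 second=0x1c
    printf 'contested mask: 0x%x\n' $((first & second))   # 0x4 -> core 2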
00:05:19.038 [2024-07-16 00:58:34.885568] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4036022 ] 00:05:19.038 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.038 [2024-07-16 00:58:34.942447] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:19.296 [2024-07-16 00:58:35.054255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:19.296 [2024-07-16 00:58:35.054311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:19.296 [2024-07-16 00:58:35.054315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.554 00:58:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:19.554 00:58:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:19.554 00:58:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=4036043 00:05:19.554 00:58:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 4036043 /var/tmp/spdk2.sock 00:05:19.554 00:58:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:19.554 00:58:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:19.554 00:58:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 4036043 /var/tmp/spdk2.sock 00:05:19.554 00:58:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:19.554 00:58:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:19.554 00:58:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:19.554 00:58:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:19.554 00:58:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 4036043 /var/tmp/spdk2.sock 00:05:19.554 00:58:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 4036043 ']' 00:05:19.554 00:58:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:19.554 00:58:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:19.554 00:58:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:19.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:19.554 00:58:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:19.554 00:58:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:19.554 [2024-07-16 00:58:35.361880] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 
00:05:19.554 [2024-07-16 00:58:35.361981] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4036043 ] 00:05:19.554 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.554 [2024-07-16 00:58:35.456035] app.c: 776:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 4036022 has claimed it. 00:05:19.554 [2024-07-16 00:58:35.456097] app.c: 907:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:20.120 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (4036043) - No such process 00:05:20.120 ERROR: process (pid: 4036043) is no longer running 00:05:20.120 00:58:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:20.120 00:58:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:20.120 00:58:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:20.120 00:58:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:20.120 00:58:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:20.120 00:58:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:20.120 00:58:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:20.120 00:58:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:20.120 00:58:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:20.120 00:58:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:20.120 00:58:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 4036022 00:05:20.120 00:58:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 4036022 ']' 00:05:20.120 00:58:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 4036022 00:05:20.120 00:58:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:05:20.120 00:58:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:20.120 00:58:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4036022 00:05:20.120 00:58:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:20.120 00:58:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:20.120 00:58:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4036022' 00:05:20.120 killing process with pid 4036022 00:05:20.120 00:58:36 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@967 -- # kill 4036022 00:05:20.120 00:58:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 4036022 00:05:20.685 00:05:20.685 real 0m1.690s 00:05:20.685 user 0m4.507s 00:05:20.685 sys 0m0.459s 00:05:20.685 00:58:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:20.685 00:58:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:20.685 ************************************ 00:05:20.685 END TEST locking_overlapped_coremask 00:05:20.685 ************************************ 00:05:20.685 00:58:36 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:20.685 00:58:36 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:20.685 00:58:36 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:20.685 00:58:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.685 00:58:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:20.685 ************************************ 00:05:20.685 START TEST locking_overlapped_coremask_via_rpc 00:05:20.685 ************************************ 00:05:20.685 00:58:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:05:20.685 00:58:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=4036305 00:05:20.685 00:58:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:20.685 00:58:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 4036305 /var/tmp/spdk.sock 00:05:20.685 00:58:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 4036305 ']' 00:05:20.685 00:58:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.685 00:58:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:20.685 00:58:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.685 00:58:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:20.685 00:58:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.685 [2024-07-16 00:58:36.626821] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:05:20.685 [2024-07-16 00:58:36.626913] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4036305 ] 00:05:20.685 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.943 [2024-07-16 00:58:36.685023] app.c: 911:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
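After the overlapping secondary is refused, the suite checks that the survivor still holds exactly its three lock files, via check_remaining_locks (cpu_locks.sh@36-38 in the trace). Reconstructed from the traced globs:

    check_remaining_locks() {
        local locks=(/var/tmp/spdk_cpu_lock_*)                    # what exists now
        local locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # what -m 0x7 holds
        [[ ${locks[*]} == "${locks_expected[*]}" ]]
    }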
00:05:20.943 [2024-07-16 00:58:36.685052] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:20.943 [2024-07-16 00:58:36.785775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:20.943 [2024-07-16 00:58:36.785882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:20.943 [2024-07-16 00:58:36.785887] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.202 00:58:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:21.202 00:58:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:21.202 00:58:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=4036325 00:05:21.202 00:58:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:21.202 00:58:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 4036325 /var/tmp/spdk2.sock 00:05:21.202 00:58:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 4036325 ']' 00:05:21.202 00:58:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:21.202 00:58:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:21.202 00:58:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:21.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:21.202 00:58:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:21.202 00:58:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.202 [2024-07-16 00:58:37.081338] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:05:21.202 [2024-07-16 00:58:37.081434] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4036325 ] 00:05:21.202 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.202 [2024-07-16 00:58:37.169735] app.c: 911:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:21.202 [2024-07-16 00:58:37.169779] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:21.460 [2024-07-16 00:58:37.393198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:21.460 [2024-07-16 00:58:37.393255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:05:21.460 [2024-07-16 00:58:37.393258] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:22.025 00:58:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:22.025 00:58:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:22.025 00:58:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:22.025 00:58:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.025 00:58:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.283 00:58:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:22.283 00:58:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:22.283 00:58:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:22.283 00:58:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:22.283 00:58:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:22.283 00:58:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:22.283 00:58:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:22.283 00:58:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:22.283 00:58:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:22.283 00:58:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.283 00:58:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.283 [2024-07-16 00:58:38.029077] app.c: 776:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 4036305 has claimed it. 
00:05:22.283 request: 00:05:22.283 { 00:05:22.283 "method": "framework_enable_cpumask_locks", 00:05:22.283 "req_id": 1 00:05:22.283 } 00:05:22.283 Got JSON-RPC error response 00:05:22.283 response: 00:05:22.283 { 00:05:22.283 "code": -32603, 00:05:22.283 "message": "Failed to claim CPU core: 2" 00:05:22.283 } 00:05:22.283 00:58:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:22.283 00:58:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:22.283 00:58:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:22.283 00:58:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:22.283 00:58:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:22.283 00:58:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 4036305 /var/tmp/spdk.sock 00:05:22.283 00:58:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 4036305 ']' 00:05:22.283 00:58:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.283 00:58:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:22.283 00:58:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.283 00:58:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:22.283 00:58:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.541 00:58:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:22.541 00:58:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:22.541 00:58:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 4036325 /var/tmp/spdk2.sock 00:05:22.541 00:58:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 4036325 ']' 00:05:22.541 00:58:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:22.541 00:58:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:22.541 00:58:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:22.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
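The -32603 response above is the expected outcome of this test: the first spdk_tgt (pid 4036305) holds the CPU core lock files (/var/tmp/spdk_cpu_lock_000..002, checked a little further down), and its core set overlaps the second target's 0x1c mask (cores 2-4) on core 2, so asking the second instance to claim locks over RPC has to fail. A minimal standalone sketch of the same conflict, assuming SPDK's scripts/rpc.py client (the harness here uses its own rpc_cmd wrapper) and inferring the first target's 0x7 mask from its reactors on cores 0-2, since its exact command line is not part of this excerpt:

    # First target claims /var/tmp/spdk_cpu_lock_000..002 for cores 0-2 (mask inferred).
    ./build/bin/spdk_tgt -m 0x7 -r /var/tmp/spdk.sock &
    # Second target overlaps on core 2 but skips lock claiming at startup.
    ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
    # Asking the second instance to claim its locks now collides on core 2:
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # -> JSON-RPC error -32603, "Failed to claim CPU core: 2"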
00:05:22.541 00:58:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:22.541 00:58:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.799 00:58:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:22.799 00:58:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:22.799 00:58:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:22.799 00:58:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:22.799 00:58:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:22.799 00:58:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:22.799 00:05:22.799 real 0m1.966s 00:05:22.799 user 0m1.000s 00:05:22.799 sys 0m0.197s 00:05:22.799 00:58:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:22.799 00:58:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.799 ************************************ 00:05:22.799 END TEST locking_overlapped_coremask_via_rpc 00:05:22.799 ************************************ 00:05:22.799 00:58:38 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:22.799 00:58:38 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:22.799 00:58:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 4036305 ]] 00:05:22.799 00:58:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 4036305 00:05:22.799 00:58:38 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 4036305 ']' 00:05:22.799 00:58:38 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 4036305 00:05:22.799 00:58:38 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:22.799 00:58:38 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:22.799 00:58:38 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4036305 00:05:22.799 00:58:38 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:22.799 00:58:38 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:22.799 00:58:38 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4036305' 00:05:22.799 killing process with pid 4036305 00:05:22.799 00:58:38 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 4036305 00:05:22.799 00:58:38 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 4036305 00:05:23.056 00:58:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 4036325 ]] 00:05:23.056 00:58:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 4036325 00:05:23.056 00:58:39 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 4036325 ']' 00:05:23.056 00:58:39 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 4036325 00:05:23.056 00:58:39 event.cpu_locks -- common/autotest_common.sh@953 -- # 
uname 00:05:23.056 00:58:39 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:23.056 00:58:39 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4036325 00:05:23.313 00:58:39 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:23.313 00:58:39 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:23.313 00:58:39 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4036325' 00:05:23.313 killing process with pid 4036325 00:05:23.313 00:58:39 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 4036325 00:05:23.313 00:58:39 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 4036325 00:05:23.571 00:58:39 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:23.571 00:58:39 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:23.571 00:58:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 4036305 ]] 00:05:23.571 00:58:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 4036305 00:05:23.571 00:58:39 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 4036305 ']' 00:05:23.571 00:58:39 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 4036305 00:05:23.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (4036305) - No such process 00:05:23.571 00:58:39 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 4036305 is not found' 00:05:23.571 Process with pid 4036305 is not found 00:05:23.571 00:58:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 4036325 ]] 00:05:23.571 00:58:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 4036325 00:05:23.571 00:58:39 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 4036325 ']' 00:05:23.571 00:58:39 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 4036325 00:05:23.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (4036325) - No such process 00:05:23.571 00:58:39 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 4036325 is not found' 00:05:23.571 Process with pid 4036325 is not found 00:05:23.571 00:58:39 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:23.571 00:05:23.571 real 0m15.711s 00:05:23.571 user 0m27.442s 00:05:23.571 sys 0m5.189s 00:05:23.571 00:58:39 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.571 00:58:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:23.571 ************************************ 00:05:23.571 END TEST cpu_locks 00:05:23.571 ************************************ 00:05:23.571 00:58:39 event -- common/autotest_common.sh@1142 -- # return 0 00:05:23.571 00:05:23.571 real 0m40.172s 00:05:23.571 user 1m15.664s 00:05:23.571 sys 0m9.245s 00:05:23.571 00:58:39 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.571 00:58:39 event -- common/autotest_common.sh@10 -- # set +x 00:05:23.571 ************************************ 00:05:23.571 END TEST event 00:05:23.571 ************************************ 00:05:23.571 00:58:39 -- common/autotest_common.sh@1142 -- # return 0 00:05:23.571 00:58:39 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:23.571 00:58:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:23.571 00:58:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.571 
00:58:39 -- common/autotest_common.sh@10 -- # set +x 00:05:23.829 ************************************ 00:05:23.829 START TEST thread 00:05:23.829 ************************************ 00:05:23.829 00:58:39 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:23.829 * Looking for test storage... 00:05:23.829 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:23.829 00:58:39 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:23.829 00:58:39 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:23.829 00:58:39 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.829 00:58:39 thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.829 ************************************ 00:05:23.829 START TEST thread_poller_perf 00:05:23.829 ************************************ 00:05:23.829 00:58:39 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:23.829 [2024-07-16 00:58:39.664335] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:05:23.829 [2024-07-16 00:58:39.664410] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4036689 ] 00:05:23.829 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.829 [2024-07-16 00:58:39.723562] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.088 [2024-07-16 00:58:39.833672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.088 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:25.057 ====================================== 00:05:25.057 busy:2713816545 (cyc) 00:05:25.057 total_run_count: 363000 00:05:25.057 tsc_hz: 2700000000 (cyc) 00:05:25.057 ====================================== 00:05:25.057 poller_cost: 7476 (cyc), 2768 (nsec) 00:05:25.057 00:05:25.057 real 0m1.299s 00:05:25.057 user 0m1.213s 00:05:25.057 sys 0m0.081s 00:05:25.057 00:58:40 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:25.057 00:58:40 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:25.057 ************************************ 00:05:25.057 END TEST thread_poller_perf 00:05:25.057 ************************************ 00:05:25.057 00:58:40 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:25.057 00:58:40 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:25.057 00:58:40 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:25.057 00:58:40 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.057 00:58:40 thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.057 ************************************ 00:05:25.057 START TEST thread_poller_perf 00:05:25.057 ************************************ 00:05:25.057 00:58:40 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:25.057 [2024-07-16 00:58:41.013622] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:05:25.057 [2024-07-16 00:58:41.013695] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4036866 ] 00:05:25.057 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.315 [2024-07-16 00:58:41.075348] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.315 [2024-07-16 00:58:41.179908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.315 Running 1000 pollers for 1 seconds with 0 microseconds period. 
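Per the banner of the poller_perf invocation above, -b is the poller count (1000), -t the duration in seconds (1) and -l the poller period in microseconds. The reported poller_cost is simply the busy cycle count divided by total_run_count, converted to nanoseconds via tsc_hz; a quick check of the first run's arithmetic, using only the figures printed above:

    awk 'BEGIN {
      busy = 2713816545; runs = 363000; tsc_hz = 2700000000  # values from the run above
      cyc = int(busy / runs)                                  # 7476 cycles per poll
      printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, cyc * 1e9 / tsc_hz
    }'   # prints: poller_cost: 7476 (cyc), 2768 (nsec)

The 0-microsecond run just started follows the same arithmetic; its much higher run count (4878000 below) is what drives the per-poll cost down to 553 cycles (204 nsec).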
00:05:26.684 ====================================== 00:05:26.684 busy:2702219202 (cyc) 00:05:26.684 total_run_count: 4878000 00:05:26.684 tsc_hz: 2700000000 (cyc) 00:05:26.684 ====================================== 00:05:26.684 poller_cost: 553 (cyc), 204 (nsec) 00:05:26.684 00:05:26.684 real 0m1.290s 00:05:26.684 user 0m1.205s 00:05:26.684 sys 0m0.079s 00:05:26.684 00:58:42 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.684 00:58:42 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:26.684 ************************************ 00:05:26.684 END TEST thread_poller_perf 00:05:26.684 ************************************ 00:05:26.684 00:58:42 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:26.684 00:58:42 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:26.684 00:05:26.684 real 0m2.750s 00:05:26.684 user 0m2.485s 00:05:26.684 sys 0m0.266s 00:05:26.684 00:58:42 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.684 00:58:42 thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.684 ************************************ 00:05:26.684 END TEST thread 00:05:26.684 ************************************ 00:05:26.684 00:58:42 -- common/autotest_common.sh@1142 -- # return 0 00:05:26.684 00:58:42 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:26.684 00:58:42 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:26.684 00:58:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.684 00:58:42 -- common/autotest_common.sh@10 -- # set +x 00:05:26.684 ************************************ 00:05:26.684 START TEST accel 00:05:26.684 ************************************ 00:05:26.684 00:58:42 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:26.684 * Looking for test storage... 00:05:26.684 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:26.684 00:58:42 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:26.684 00:58:42 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:05:26.684 00:58:42 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:26.684 00:58:42 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=4037157 00:05:26.684 00:58:42 accel -- accel/accel.sh@63 -- # waitforlisten 4037157 00:05:26.684 00:58:42 accel -- common/autotest_common.sh@829 -- # '[' -z 4037157 ']' 00:05:26.684 00:58:42 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.684 00:58:42 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:26.684 00:58:42 accel -- accel/accel.sh@61 -- # build_accel_config 00:05:26.684 00:58:42 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:26.684 00:58:42 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:26.684 00:58:42 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:26.684 00:58:42 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:26.684 00:58:42 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:26.684 00:58:42 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:26.684 00:58:42 accel -- common/autotest_common.sh@10 -- # set +x 00:05:26.684 00:58:42 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:26.684 00:58:42 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:26.684 00:58:42 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:26.684 00:58:42 accel -- accel/accel.sh@41 -- # jq -r . 00:05:26.684 [2024-07-16 00:58:42.473032] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:05:26.684 [2024-07-16 00:58:42.473113] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4037157 ] 00:05:26.684 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.684 [2024-07-16 00:58:42.530091] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.684 [2024-07-16 00:58:42.639250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.940 00:58:42 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:26.940 00:58:42 accel -- common/autotest_common.sh@862 -- # return 0 00:05:26.940 00:58:42 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:26.940 00:58:42 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:26.940 00:58:42 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:26.940 00:58:42 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:26.940 00:58:42 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:26.940 00:58:42 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:26.940 00:58:42 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:26.940 00:58:42 accel -- common/autotest_common.sh@10 -- # set +x 00:05:26.940 00:58:42 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:26.940 00:58:42 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:26.940 00:58:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:26.940 00:58:42 accel -- accel/accel.sh@72 -- # IFS== 00:05:26.940 00:58:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:26.940 00:58:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:26.940 00:58:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:26.940 00:58:42 accel -- accel/accel.sh@72 -- # IFS== 00:05:26.940 00:58:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:26.940 00:58:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:26.940 00:58:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:26.940 00:58:42 accel -- accel/accel.sh@72 -- # IFS== 00:05:26.940 00:58:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:26.940 00:58:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:26.940 00:58:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:26.940 00:58:42 accel -- accel/accel.sh@72 -- # IFS== 00:05:26.940 00:58:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:26.941 00:58:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:26.941 00:58:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:26.941 00:58:42 accel -- accel/accel.sh@72 -- # IFS== 00:05:26.941 00:58:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:26.941 00:58:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:26.941 00:58:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:26.941 00:58:42 accel -- accel/accel.sh@72 -- # IFS== 00:05:26.941 00:58:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:26.941 00:58:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:26.941 00:58:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:26.941 00:58:42 accel -- accel/accel.sh@72 -- # IFS== 00:05:26.941 00:58:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:26.941 00:58:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:26.941 00:58:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:26.941 00:58:42 accel -- accel/accel.sh@72 -- # IFS== 00:05:26.941 00:58:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:26.941 00:58:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:26.941 00:58:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:26.941 00:58:42 accel -- accel/accel.sh@72 -- # IFS== 00:05:26.941 00:58:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:26.941 00:58:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:26.941 00:58:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:26.941 00:58:42 accel -- accel/accel.sh@72 -- # IFS== 00:05:26.941 00:58:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:26.941 00:58:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:26.941 00:58:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:26.941 00:58:42 accel -- accel/accel.sh@72 -- # IFS== 00:05:26.941 00:58:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:26.941 00:58:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:26.941 00:58:42 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:05:26.941 00:58:42 accel -- accel/accel.sh@72 -- # IFS== 00:05:26.941 00:58:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:26.941 00:58:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:26.941 00:58:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:26.941 00:58:42 accel -- accel/accel.sh@72 -- # IFS== 00:05:26.941 00:58:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:26.941 00:58:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:26.941 00:58:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:26.941 00:58:42 accel -- accel/accel.sh@72 -- # IFS== 00:05:26.941 00:58:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:26.941 00:58:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:26.941 00:58:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:26.941 00:58:42 accel -- accel/accel.sh@72 -- # IFS== 00:05:26.941 00:58:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:26.941 00:58:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:26.941 00:58:42 accel -- accel/accel.sh@75 -- # killprocess 4037157 00:05:26.941 00:58:42 accel -- common/autotest_common.sh@948 -- # '[' -z 4037157 ']' 00:05:26.941 00:58:42 accel -- common/autotest_common.sh@952 -- # kill -0 4037157 00:05:26.941 00:58:42 accel -- common/autotest_common.sh@953 -- # uname 00:05:26.941 00:58:42 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:26.941 00:58:42 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4037157 00:05:27.197 00:58:42 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:27.197 00:58:42 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:27.197 00:58:42 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4037157' 00:05:27.197 killing process with pid 4037157 00:05:27.197 00:58:42 accel -- common/autotest_common.sh@967 -- # kill 4037157 00:05:27.197 00:58:42 accel -- common/autotest_common.sh@972 -- # wait 4037157 00:05:27.454 00:58:43 accel -- accel/accel.sh@76 -- # trap - ERR 00:05:27.454 00:58:43 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:27.454 00:58:43 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:27.454 00:58:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.454 00:58:43 accel -- common/autotest_common.sh@10 -- # set +x 00:05:27.454 00:58:43 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:05:27.454 00:58:43 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:27.454 00:58:43 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:27.454 00:58:43 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:27.454 00:58:43 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:27.454 00:58:43 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:27.454 00:58:43 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:27.454 00:58:43 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:27.454 00:58:43 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:27.454 00:58:43 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:05:27.454 00:58:43 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.454 00:58:43 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:05:27.711 00:58:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:27.711 00:58:43 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:27.711 00:58:43 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:27.711 00:58:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.711 00:58:43 accel -- common/autotest_common.sh@10 -- # set +x 00:05:27.711 ************************************ 00:05:27.711 START TEST accel_missing_filename 00:05:27.711 ************************************ 00:05:27.711 00:58:43 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:05:27.711 00:58:43 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:05:27.711 00:58:43 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:27.711 00:58:43 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:27.711 00:58:43 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:27.711 00:58:43 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:27.711 00:58:43 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:27.711 00:58:43 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:05:27.711 00:58:43 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:27.711 00:58:43 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:05:27.711 00:58:43 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:27.711 00:58:43 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:27.711 00:58:43 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:27.711 00:58:43 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:27.711 00:58:43 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:27.711 00:58:43 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:05:27.711 00:58:43 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:05:27.711 [2024-07-16 00:58:43.496311] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:05:27.711 [2024-07-16 00:58:43.496374] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4037327 ] 00:05:27.711 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.711 [2024-07-16 00:58:43.552968] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.711 [2024-07-16 00:58:43.659560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.967 [2024-07-16 00:58:43.717013] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:27.967 [2024-07-16 00:58:43.790755] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:05:27.967 A filename is required. 
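"A filename is required." is the failure this case asserts: per the accel_perf usage text printed further down, compress/decompress workloads take the uncompressed input file via -l, and without it the app refuses to start. The passing form, reusing the repo's test-data file that the next case feeds it, would presumably be:

    # The accel_compress_verify case below adds -y on top of this and asserts
    # that the verify option is then rejected for compress.
    ./build/examples/accel_perf -t 1 -w compress -l ./test/accel/bib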
00:05:27.967 00:58:43 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:05:27.967 00:58:43 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:27.967 00:58:43 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:05:27.967 00:58:43 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:05:27.967 00:58:43 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:05:27.967 00:58:43 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:27.967 00:05:27.967 real 0m0.417s 00:05:27.967 user 0m0.307s 00:05:27.967 sys 0m0.144s 00:05:27.967 00:58:43 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.967 00:58:43 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:05:27.967 ************************************ 00:05:27.967 END TEST accel_missing_filename 00:05:27.967 ************************************ 00:05:27.967 00:58:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:27.967 00:58:43 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:27.967 00:58:43 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:27.967 00:58:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.967 00:58:43 accel -- common/autotest_common.sh@10 -- # set +x 00:05:27.967 ************************************ 00:05:27.967 START TEST accel_compress_verify 00:05:27.967 ************************************ 00:05:27.967 00:58:43 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:27.967 00:58:43 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:05:27.967 00:58:43 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:27.967 00:58:43 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:27.967 00:58:43 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:27.967 00:58:43 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:27.967 00:58:43 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:27.967 00:58:43 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:27.967 00:58:43 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:27.967 00:58:43 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:27.967 00:58:43 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:27.967 00:58:43 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:27.967 00:58:43 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:27.967 00:58:43 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:27.967 00:58:43 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:27.967 00:58:43 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:27.967 00:58:43 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:05:27.967 [2024-07-16 00:58:43.958651] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:05:27.967 [2024-07-16 00:58:43.958710] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4037364 ] 00:05:28.224 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.224 [2024-07-16 00:58:44.014922] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.224 [2024-07-16 00:58:44.118923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.224 [2024-07-16 00:58:44.173913] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:28.481 [2024-07-16 00:58:44.254736] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:05:28.481 00:05:28.481 Compression does not support the verify option, aborting. 00:05:28.481 00:58:44 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:05:28.481 00:58:44 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:28.481 00:58:44 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:05:28.481 00:58:44 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:05:28.481 00:58:44 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:05:28.481 00:58:44 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:28.481 00:05:28.481 real 0m0.425s 00:05:28.481 user 0m0.323s 00:05:28.481 sys 0m0.134s 00:05:28.481 00:58:44 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.481 00:58:44 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:05:28.481 ************************************ 00:05:28.481 END TEST accel_compress_verify 00:05:28.481 ************************************ 00:05:28.481 00:58:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:28.481 00:58:44 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:28.481 00:58:44 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:28.481 00:58:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.481 00:58:44 accel -- common/autotest_common.sh@10 -- # set +x 00:05:28.481 ************************************ 00:05:28.481 START TEST accel_wrong_workload 00:05:28.481 ************************************ 00:05:28.481 00:58:44 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:05:28.481 00:58:44 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:05:28.481 00:58:44 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:28.481 00:58:44 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:28.481 00:58:44 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:28.481 00:58:44 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:28.481 00:58:44 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:28.481 00:58:44 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:05:28.481 00:58:44 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:28.481 00:58:44 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:28.481 00:58:44 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:28.481 00:58:44 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:28.481 00:58:44 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:28.481 00:58:44 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:28.481 00:58:44 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:28.481 00:58:44 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:28.481 00:58:44 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:05:28.481 Unsupported workload type: foobar 00:05:28.481 [2024-07-16 00:58:44.437376] app.c:1460:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:28.481 accel_perf options: 00:05:28.481 [-h help message] 00:05:28.481 [-q queue depth per core] 00:05:28.481 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:28.481 [-T number of threads per core 00:05:28.481 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:28.481 [-t time in seconds] 00:05:28.481 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:28.481 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:28.481 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:28.481 [-l for compress/decompress workloads, name of uncompressed input file 00:05:28.481 [-S for crc32c workload, use this seed value (default 0) 00:05:28.481 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:28.481 [-f for fill workload, use this BYTE value (default 255) 00:05:28.481 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:28.481 [-y verify result if this switch is on] 00:05:28.481 [-a tasks to allocate per core (default: same value as -q)] 00:05:28.481 Can be used to spread operations across a wider range of memory. 
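The usage listing printed on the 'foobar' failure doubles as a reference for the positive cases that follow. A couple of well-formed invocations built only from the flags documented above (the crc32c form matches the accel_crc32c test further down; the copy values are illustrative):

    # crc32c with seed 32 and result verification, as exercised below
    ./build/examples/accel_perf -t 1 -w crc32c -S 32 -y
    # copy workload at queue depth 64 with 4 KiB transfers
    ./build/examples/accel_perf -t 1 -w copy -q 64 -o 4096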
00:05:28.481 00:58:44 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:05:28.481 00:58:44 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:28.481 00:58:44 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:28.481 00:58:44 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:28.481 00:05:28.481 real 0m0.024s 00:05:28.481 user 0m0.011s 00:05:28.481 sys 0m0.013s 00:05:28.481 00:58:44 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.481 00:58:44 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:05:28.481 ************************************ 00:05:28.481 END TEST accel_wrong_workload 00:05:28.481 ************************************ 00:05:28.481 Error: writing output failed: Broken pipe 00:05:28.481 00:58:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:28.481 00:58:44 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:28.481 00:58:44 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:28.481 00:58:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.481 00:58:44 accel -- common/autotest_common.sh@10 -- # set +x 00:05:28.739 ************************************ 00:05:28.739 START TEST accel_negative_buffers 00:05:28.739 ************************************ 00:05:28.739 00:58:44 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:28.739 00:58:44 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:05:28.739 00:58:44 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:28.739 00:58:44 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:28.739 00:58:44 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:28.739 00:58:44 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:28.739 00:58:44 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:28.739 00:58:44 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:05:28.739 00:58:44 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:28.739 00:58:44 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:28.739 00:58:44 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:28.739 00:58:44 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:28.739 00:58:44 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:28.739 00:58:44 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:28.739 00:58:44 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:28.739 00:58:44 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:28.739 00:58:44 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:05:28.739 -x option must be non-negative. 
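Same pattern for -x: the value -1 trips the non-negative check, and the usage text (repeated below) gives the real constraint, a minimum of two xor source buffers. The corrected run would be:

    ./build/examples/accel_perf -t 1 -w xor -y -x 2   # at least 2 source buffers per the usage text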
00:05:28.739 [2024-07-16 00:58:44.507297] app.c:1460:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:28.739 accel_perf options: 00:05:28.739 [-h help message] 00:05:28.740 [-q queue depth per core] 00:05:28.740 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:28.740 [-T number of threads per core 00:05:28.740 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:28.740 [-t time in seconds] 00:05:28.740 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:28.740 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:28.740 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:28.740 [-l for compress/decompress workloads, name of uncompressed input file 00:05:28.740 [-S for crc32c workload, use this seed value (default 0) 00:05:28.740 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:28.740 [-f for fill workload, use this BYTE value (default 255) 00:05:28.740 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:28.740 [-y verify result if this switch is on] 00:05:28.740 [-a tasks to allocate per core (default: same value as -q)] 00:05:28.740 Can be used to spread operations across a wider range of memory. 00:05:28.740 00:58:44 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:05:28.740 00:58:44 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:28.740 00:58:44 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:28.740 00:58:44 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:28.740 00:05:28.740 real 0m0.023s 00:05:28.740 user 0m0.012s 00:05:28.740 sys 0m0.011s 00:05:28.740 00:58:44 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.740 00:58:44 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:05:28.740 ************************************ 00:05:28.740 END TEST accel_negative_buffers 00:05:28.740 ************************************ 00:05:28.740 Error: writing output failed: Broken pipe 00:05:28.740 00:58:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:28.740 00:58:44 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:28.740 00:58:44 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:28.740 00:58:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.740 00:58:44 accel -- common/autotest_common.sh@10 -- # set +x 00:05:28.740 ************************************ 00:05:28.740 START TEST accel_crc32c 00:05:28.740 ************************************ 00:05:28.740 00:58:44 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:28.740 00:58:44 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:28.740 00:58:44 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:28.740 00:58:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:28.740 00:58:44 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:28.740 00:58:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:28.740 00:58:44 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:28.740 00:58:44 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:28.740 00:58:44 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:28.740 00:58:44 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:28.740 00:58:44 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:28.740 00:58:44 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:28.740 00:58:44 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:28.740 00:58:44 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:28.740 00:58:44 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:28.740 [2024-07-16 00:58:44.572867] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:05:28.740 [2024-07-16 00:58:44.572929] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4037538 ] 00:05:28.740 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.740 [2024-07-16 00:58:44.629947] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.999 [2024-07-16 00:58:44.739433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.999 00:58:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:28.999 00:58:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:28.999 00:58:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:28.999 00:58:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:28.999 00:58:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:28.999 00:58:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:28.999 00:58:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:28.999 00:58:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:28.999 00:58:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:28.999 00:58:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:28.999 00:58:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:28.999 00:58:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:28.999 00:58:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:28.999 00:58:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:28.999 00:58:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:28.999 00:58:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:28.999 00:58:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:28.999 00:58:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:28.999 00:58:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:28.999 00:58:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:28.999 00:58:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:05:28.999 00:58:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:28.999 00:58:44 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:28.999 00:58:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:28.999 00:58:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:28.999 00:58:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:28.999 00:58:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:05:28.999 00:58:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:28.999 00:58:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:28.999 00:58:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:28.999 00:58:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:28.999 00:58:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:28.999 00:58:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:28.999 00:58:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:28.999 00:58:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:28.999 00:58:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:28.999 00:58:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:28.999 00:58:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:05:28.999 00:58:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:28.999 00:58:44 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:28.999 00:58:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:28.999 00:58:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:28.999 00:58:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:28.999 00:58:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:28.999 00:58:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:28.999 00:58:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:28.999 00:58:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:28.999 00:58:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:28.999 00:58:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:28.999 00:58:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:28.999 00:58:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:05:28.999 00:58:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:28.999 00:58:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:28.999 00:58:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:28.999 00:58:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:28.999 00:58:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:28.999 00:58:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:28.999 00:58:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:28.999 00:58:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:28.999 00:58:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:28.999 00:58:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:28.999 00:58:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:28.999 00:58:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:28.999 00:58:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:28.999 00:58:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:28.999 00:58:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:28.999 00:58:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:28.999 00:58:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:28.999 00:58:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:28.999 00:58:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:30.370 00:58:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:30.370 00:58:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:05:30.370 00:58:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:30.370 00:58:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:30.370 00:58:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:30.371 00:58:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:30.371 00:58:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:30.371 00:58:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:30.371 00:58:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:30.371 00:58:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:30.371 00:58:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:30.371 00:58:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:30.371 00:58:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:30.371 00:58:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:30.371 00:58:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:30.371 00:58:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:30.371 00:58:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:30.371 00:58:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:30.371 00:58:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:30.371 00:58:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:30.371 00:58:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:30.371 00:58:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:30.371 00:58:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:30.371 00:58:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:30.371 00:58:45 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:30.371 00:58:45 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:30.371 00:58:45 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:30.371 00:05:30.371 real 0m1.433s 00:05:30.371 user 0m1.307s 00:05:30.371 sys 0m0.129s 00:05:30.371 00:58:45 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.371 00:58:45 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:30.371 ************************************ 00:05:30.371 END TEST accel_crc32c 00:05:30.371 ************************************ 00:05:30.371 00:58:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:30.371 00:58:46 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:30.371 00:58:46 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:30.371 00:58:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.371 00:58:46 accel -- common/autotest_common.sh@10 -- # set +x 00:05:30.371 ************************************ 00:05:30.371 START TEST accel_crc32c_C2 00:05:30.371 ************************************ 00:05:30.371 00:58:46 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:30.371 00:58:46 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:30.371 00:58:46 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:30.371 00:58:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:30.371 00:58:46 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:30.371 00:58:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:30.371 00:58:46 accel.accel_crc32c_C2 
00:05:30.371 00:58:46 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2
00:05:30.371 ************************************
00:05:30.371 START TEST accel_crc32c_C2
00:05:30.371 ************************************
00:05:30.371 00:58:46 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2
00:05:30.371 00:58:46 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config
00:05:30.371 00:58:46 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r .
00:05:30.371 [2024-07-16 00:58:46.055496] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization...
00:05:30.371 [2024-07-16 00:58:46.055557] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4037699 ]
00:05:30.371 EAL: No free 2048 kB hugepages reported on node 1
00:05:30.371 [2024-07-16 00:58:46.111725] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:30.371 [2024-07-16 00:58:46.214152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:30.371 00:58:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1
00:05:30.371 00:58:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c
00:05:30.371 00:58:46 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c
00:05:30.371 00:58:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0
00:05:30.371 00:58:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:30.371 00:58:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software
00:05:30.371 00:58:46 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software
00:05:30.371 00:58:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:05:30.371 00:58:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:05:30.371 00:58:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1
00:05:30.371 00:58:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds'
00:05:30.371 00:58:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes
00:05:31.743 00:58:47 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:31.743 00:58:47 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]]
00:05:31.743 00:58:47 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:31.743 real 0m1.432s
00:05:31.743 user 0m1.301s
00:05:31.743 sys 0m0.132s
00:05:31.743 ************************************
00:05:31.743 END TEST accel_crc32c_C2
00:05:31.743 ************************************
00:05:31.743 00:58:47 accel -- common/autotest_common.sh@1142 -- # return 0
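Each `run_test NAME cmd ...` call in this log is what prints the START TEST / END TEST banners and the real/user/sys timing around a test body. A rough sketch of what such a wrapper could look like, assuming this shape rather than quoting the real autotest_common.sh:

    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"                # bash's time keyword prints real/user/sys
        local rc=$?              # exit status of the timed command
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return "$rc"
    }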
00:05:31.743 00:58:47 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y
00:05:31.743 ************************************
00:05:31.743 START TEST accel_copy
00:05:31.743 ************************************
00:05:31.743 00:58:47 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y
00:05:31.743 00:58:47 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config
00:05:31.743 00:58:47 accel.accel_copy -- accel/accel.sh@41 -- # jq -r .
00:05:31.743 [2024-07-16 00:58:47.532932] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization...
00:05:31.743 [2024-07-16 00:58:47.533000] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4037858 ]
00:05:31.743 EAL: No free 2048 kB hugepages reported on node 1
00:05:31.743 [2024-07-16 00:58:47.591427] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:31.743 [2024-07-16 00:58:47.695730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:32.001 00:58:47 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1
00:05:32.001 00:58:47 accel.accel_copy -- accel/accel.sh@20 -- # val=copy
00:05:32.001 00:58:47 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy
00:05:32.001 00:58:47 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:32.001 00:58:47 accel.accel_copy -- accel/accel.sh@20 -- # val=software
00:05:32.001 00:58:47 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software
00:05:32.001 00:58:47 accel.accel_copy -- accel/accel.sh@20 -- # val=32
00:05:32.001 00:58:47 accel.accel_copy -- accel/accel.sh@20 -- # val=32
00:05:32.001 00:58:47 accel.accel_copy -- accel/accel.sh@20 -- # val=1
00:05:32.001 00:58:47 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds'
00:05:32.001 00:58:47 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes
00:05:33.375 00:58:48 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:33.375 00:58:48 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]]
00:05:33.375 00:58:48 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:33.375 real 0m1.428s
00:05:33.375 user 0m1.292s
00:05:33.375 sys 0m0.137s
00:05:33.375 ************************************
00:05:33.375 END TEST accel_copy
00:05:33.375 ************************************
00:05:33.375 00:58:48 accel -- common/autotest_common.sh@1142 -- # return 0
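Every accel_perf instance in this log is launched with `-c /dev/fd/62`, meaning build_accel_config hands it the JSON accel configuration over an inherited file descriptor rather than a file on disk. The same technique in miniature, with jq standing in for accel_perf and a made-up empty config:

    # Process substitution: <(...) becomes a /dev/fd/NN path the child opens.
    jq -r . <(printf '%s\n' '{"subsystems":[]}')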
00:05:33.375 00:58:48 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
00:05:33.375 ************************************
00:05:33.375 START TEST accel_fill
00:05:33.375 ************************************
00:05:33.375 00:58:48 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y
00:05:33.375 00:58:48 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config
00:05:33.375 00:58:48 accel.accel_fill -- accel/accel.sh@41 -- # jq -r .
00:05:33.376 [2024-07-16 00:58:49.005737] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization...
00:05:33.376 [2024-07-16 00:58:49.005814] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4038070 ]
00:05:33.376 EAL: No free 2048 kB hugepages reported on node 1
00:05:33.376 [2024-07-16 00:58:49.064170] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:33.376 [2024-07-16 00:58:49.170392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:33.376 00:58:49 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1
00:05:33.376 00:58:49 accel.accel_fill -- accel/accel.sh@20 -- # val=fill
00:05:33.376 00:58:49 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill
00:05:33.376 00:58:49 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80
00:05:33.376 00:58:49 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:33.376 00:58:49 accel.accel_fill -- accel/accel.sh@20 -- # val=software
00:05:33.376 00:58:49 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software
00:05:33.376 00:58:49 accel.accel_fill -- accel/accel.sh@20 -- # val=64
00:05:33.376 00:58:49 accel.accel_fill -- accel/accel.sh@20 -- # val=64
00:05:33.376 00:58:49 accel.accel_fill -- accel/accel.sh@20 -- # val=1
00:05:33.376 00:58:49 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds'
00:05:33.376 00:58:49 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes
00:05:34.747 00:58:50 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:34.747 00:58:50 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]]
00:05:34.747 00:58:50 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:34.747 real 0m1.434s
00:05:34.747 user 0m1.307s
00:05:34.747 sys 0m0.128s
00:05:34.747 ************************************
00:05:34.747 END TEST accel_fill
00:05:34.747 ************************************
00:05:34.747 00:58:50 accel -- common/autotest_common.sh@1142 -- # return 0
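accel_fill, just completed above, is the first workload invoked with extra knobs. Matching the flags against the traced values, `-f 128` surfaces as val=0x80 (the fill byte), and the two val=64 reads line up with `-q 64 -a 64`, plausibly queue depth and buffer alignment. An annotated invocation under those assumptions (the comments are inferences from the trace, not accel_perf documentation):

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
        -c /dev/fd/62 `# JSON accel config over an inherited fd` \
        -t 1          `# run for 1 second` \
        -w fill       `# fill workload` \
        -f 128        `# fill byte; shows up as 0x80 in the trace` \
        -q 64         `# inferred: queue depth` \
        -a 64         `# inferred: buffer alignment` \
        -y            `# verify the result`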
00:05:34.747 00:58:50 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y
00:05:34.747 ************************************
00:05:34.747 START TEST accel_copy_crc32c
00:05:34.747 ************************************
00:05:34.747 00:58:50 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y
00:05:34.747 00:58:50 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config
00:05:34.747 00:58:50 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r .
00:05:34.747 [2024-07-16 00:58:50.488021] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization...
00:05:34.747 [2024-07-16 00:58:50.488080] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4038284 ]
00:05:34.747 EAL: No free 2048 kB hugepages reported on node 1
00:05:34.747 [2024-07-16 00:58:50.543815] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:34.747 [2024-07-16 00:58:50.650469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:34.747 00:58:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1
00:05:34.747 00:58:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c
00:05:34.747 00:58:50 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c
00:05:34.747 00:58:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0
00:05:34.748 00:58:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:34.748 00:58:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:34.748 00:58:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software
00:05:34.748 00:58:50 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software
00:05:34.748 00:58:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32
00:05:34.748 00:58:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32
00:05:34.748 00:58:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1
00:05:34.748 00:58:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds'
00:05:34.748 00:58:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes
00:05:36.118 00:58:51 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:36.118 00:58:51 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:05:36.118 00:58:51 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:36.118 real 0m1.425s
00:05:36.118 user 0m1.300s
00:05:36.118 sys 0m0.128s
00:05:36.118 ************************************
00:05:36.118 END TEST accel_copy_crc32c
00:05:36.118 ************************************
00:05:36.118 00:58:51 accel -- common/autotest_common.sh@1142 -- # return 0
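The real/user/sys triple closing each test comes from bash's `time` keyword. If a time budget were wanted on top of the raw numbers, elapsed seconds can be captured and compared like this (the 5-second threshold and the `sleep 0.1` stand-in are arbitrary):

    TIMEFORMAT='%R'                                   # elapsed seconds only
    elapsed=$( { time sleep 0.1 >/dev/null; } 2>&1 )  # time writes to stderr
    awk -v t="$elapsed" 'BEGIN { exit !(t < 5.0) }' &&
        echo "within budget ($elapsed s)" || echo "too slow ($elapsed s)"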
00:05:36.118 00:58:51 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2
00:05:36.118 ************************************
00:05:36.118 START TEST accel_copy_crc32c_C2
00:05:36.118 ************************************
00:05:36.118 00:58:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2
00:05:36.118 00:58:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config
00:05:36.118 00:58:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r .
00:05:36.118 [2024-07-16 00:58:51.964311] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization...
00:05:36.118 [2024-07-16 00:58:51.964376] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4038451 ]
00:05:36.118 EAL: No free 2048 kB hugepages reported on node 1
00:05:36.118 [2024-07-16 00:58:52.020978] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:36.375 [2024-07-16 00:58:52.125596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:36.375 00:58:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1
00:05:36.375 00:58:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c
00:05:36.375 00:58:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c
00:05:36.375 00:58:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0
00:05:36.375 00:58:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:36.375 00:58:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes'
00:05:36.376 00:58:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software
00:05:36.376 00:58:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software
00:05:36.376 00:58:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:05:36.376 00:58:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:05:36.376 00:58:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1
00:05:36.376 00:58:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds'
00:05:36.376 00:58:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes
00:05:37.744 00:58:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:37.744 00:58:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:05:37.744 00:58:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:37.744 real 0m1.435s
00:05:37.744 user 0m1.297s
00:05:37.744 sys 0m0.140s
00:05:37.744 ************************************
00:05:37.744 END TEST accel_copy_crc32c_C2
00:05:37.744 ************************************
00:05:37.744 00:58:53 accel -- common/autotest_common.sh@1142 -- # return 0
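The only visible difference between accel_copy_crc32c and the _C2 run just completed is the `-C 2` flag, plus a destination read back as '8192 bytes' against a 4096-byte source, which suggests two 4096-byte buffers chained per operation. The two invocations, as issued by accel.sh@105 and @106:

    run_test accel_copy_crc32c    accel_test -t 1 -w copy_crc32c -y
    run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2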
00:05:37.744 00:58:53 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y
00:05:37.744 ************************************
00:05:37.744 START TEST accel_dualcast
00:05:37.744 ************************************
00:05:37.744 00:58:53 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y
00:05:37.744 00:58:53 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config
00:05:37.744 00:58:53 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r .
00:05:37.744 [2024-07-16 00:58:53.453789] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization...
00:05:37.744 [2024-07-16 00:58:53.453853] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4038604 ]
00:05:37.744 EAL: No free 2048 kB hugepages reported on node 1
00:05:37.744 [2024-07-16 00:58:53.513904] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:37.744 [2024-07-16 00:58:53.618634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:37.744 00:58:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1
00:05:37.744 00:58:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast
00:05:37.744 00:58:53 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast
00:05:37.744 00:58:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:37.744 00:58:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software
00:05:37.744 00:58:53 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software
00:05:37.744 00:58:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32
00:05:37.744 00:58:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32
00:05:37.744 00:58:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1
00:05:37.744 00:58:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds'
00:05:37.744 00:58:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:39.112 00:58:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:39.112 00:58:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:39.112 00:58:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:39.112 00:58:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:39.112 00:58:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:39.112 00:58:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:39.112 00:58:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:39.112 00:58:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:39.112 00:58:54 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:39.112 00:58:54 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:05:39.112 00:58:54 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:39.112 00:05:39.112 real 0m1.430s 00:05:39.112 user 0m1.301s 00:05:39.112 sys 0m0.130s 00:05:39.112 00:58:54 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:39.112 00:58:54 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:05:39.112 ************************************ 00:05:39.112 END TEST accel_dualcast 00:05:39.112 ************************************ 00:05:39.112 00:58:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:39.112 00:58:54 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:39.112 00:58:54 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:39.112 00:58:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.112 00:58:54 accel -- common/autotest_common.sh@10 -- # set +x 00:05:39.112 ************************************ 00:05:39.112 START TEST accel_compare 00:05:39.112 ************************************ 00:05:39.112 00:58:54 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:05:39.112 00:58:54 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:05:39.112 00:58:54 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:05:39.112 00:58:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:39.112 00:58:54 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:39.112 00:58:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:39.112 00:58:54 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:39.112 00:58:54 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:05:39.112 00:58:54 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:39.112 00:58:54 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:39.112 00:58:54 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:39.112 00:58:54 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:39.112 00:58:54 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:39.112 00:58:54 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:05:39.112 00:58:54 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:05:39.112 [2024-07-16 00:58:54.923983] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 
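
The IFS=: / read -r var val / case "$var" triplets that dominate the blocks above are bash xtrace records from the config-parsing loop in accel/accel.sh: accel_test feeds the function key:value lines, and each val=... record is the traced assignment on accel.sh line 20 (the @20 in the trace) as one pair is consumed. A minimal sketch of that shape -- a hypothetical reduction for orientation, not the harness code verbatim:

    # Read "name:value" pairs and dispatch on the name; under set -x each
    # iteration prints the IFS=:, read -r var val and case lines seen in this log.
    while IFS=: read -r var val; do
        case "$var" in
            opc) accel_opc=$val ;;        # assumed key name, e.g. dualcast
            module) accel_module=$val ;;  # assumed key name, e.g. software
            *) : ;;                       # remaining keys ignored in this sketch
        esac
    done
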
00:05:39.112 00:58:54 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y
00:05:39.112 00:58:54 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:05:39.112 00:58:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:39.112 00:58:54 accel -- common/autotest_common.sh@10 -- # set +x
00:05:39.112 ************************************
00:05:39.112 START TEST accel_compare
00:05:39.112 ************************************
00:05:39.112 00:58:54 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y
00:05:39.112 00:58:54 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc
00:05:39.112 00:58:54 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module
00:05:39.112 00:58:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:05:39.112 00:58:54 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y
00:05:39.112 00:58:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:05:39.112 00:58:54 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y
00:05:39.112 00:58:54 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config
00:05:39.112 00:58:54 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:39.112 00:58:54 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:05:39.112 00:58:54 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:39.112 00:58:54 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:39.112 00:58:54 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]]
00:05:39.112 00:58:54 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=,
00:05:39.112 00:58:54 accel.accel_compare -- accel/accel.sh@41 -- # jq -r .
00:05:39.112 [2024-07-16 00:58:54.923983] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization...
00:05:39.112 [2024-07-16 00:58:54.924055] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4038876 ]
00:05:39.112 EAL: No free 2048 kB hugepages reported on node 1
00:05:39.112 [2024-07-16 00:58:54.979541] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:39.112 [2024-07-16 00:58:55.083446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:39.369 00:58:55 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:05:39.369 00:58:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:05:39.369 00:58:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:05:39.369 00:58:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:05:39.369 00:58:55 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:05:39.369 00:58:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:05:39.369 00:58:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:05:39.369 00:58:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:05:39.369 00:58:55 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1
00:05:39.369 00:58:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:05:39.369 00:58:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:05:39.369 00:58:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:05:39.369 00:58:55 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:05:39.369 00:58:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:05:39.369 00:58:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:05:39.369 00:58:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:05:39.369 00:58:55 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:05:39.369 00:58:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:05:39.369 00:58:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:05:39.369 00:58:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:05:39.369 00:58:55 accel.accel_compare -- accel/accel.sh@20 -- # val=compare
00:05:39.369 00:58:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:05:39.369 00:58:55 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare
00:05:39.369 00:58:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:05:39.369 00:58:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:05:39.369 00:58:55 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:39.369 00:58:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:05:39.369 00:58:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:05:39.369 00:58:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:05:39.369 00:58:55 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:05:39.369 00:58:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:05:39.369 00:58:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:05:39.369 00:58:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:05:39.369 00:58:55 accel.accel_compare -- accel/accel.sh@20 -- # val=software
00:05:39.369 00:58:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:05:39.369 00:58:55 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software
00:05:39.369 00:58:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:05:39.370 00:58:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:05:39.370 00:58:55 accel.accel_compare -- accel/accel.sh@20 -- # val=32
00:05:39.370 00:58:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:05:39.370 00:58:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:05:39.370 00:58:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:05:39.370 00:58:55 accel.accel_compare -- accel/accel.sh@20 -- # val=32
00:05:39.370 00:58:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:05:39.370 00:58:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:05:39.370 00:58:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:05:39.370 00:58:55 accel.accel_compare -- accel/accel.sh@20 -- # val=1
00:05:39.370 00:58:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:05:39.370 00:58:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:05:39.370 00:58:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:05:39.370 00:58:55 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds'
00:05:39.370 00:58:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:05:39.370 00:58:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:05:39.370 00:58:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:05:39.370 00:58:55 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes
00:05:39.370 00:58:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:05:39.370 00:58:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:05:39.370 00:58:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:05:39.370 00:58:55 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:05:39.370 00:58:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:05:39.370 00:58:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:05:39.370 00:58:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:05:39.370 00:58:55 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:05:39.370 00:58:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:05:39.370 00:58:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:05:39.370 00:58:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:05:40.740 00:58:56 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:05:40.740 00:58:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:05:40.740 00:58:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:05:40.740 00:58:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:05:40.740 00:58:56 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:05:40.740 00:58:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:05:40.740 00:58:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:05:40.740 00:58:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:05:40.740 00:58:56 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:05:40.740 00:58:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:05:40.740 00:58:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:05:40.740 00:58:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:05:40.740 00:58:56 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:05:40.740 00:58:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:05:40.740 00:58:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:05:40.740 00:58:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:05:40.740 00:58:56 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:05:40.740 00:58:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:05:40.740 00:58:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:05:40.740 00:58:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:05:40.740 00:58:56 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:05:40.740 00:58:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:05:40.740 00:58:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:05:40.740 00:58:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:05:40.740 00:58:56 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:40.740 00:58:56 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]]
00:05:40.740 00:58:56 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:40.740
00:05:40.740 real 0m1.427s
00:05:40.740 user 0m1.292s
00:05:40.740 sys 0m0.135s
00:05:40.740 00:58:56 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:40.740 00:58:56 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x
00:05:40.740 ************************************
00:05:40.740 END TEST accel_compare
00:05:40.740 ************************************
00:05:40.740 00:58:56 accel -- common/autotest_common.sh@1142 -- # return 0
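
The real/user/sys triple and the starred banners around each case come from the run_test wrapper in common/autotest_common.sh, which times the test body with the shell's time builtin; schematically -- a simplified sketch, not the exact function:

    run_test() {
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"    # prints the real/user/sys summary seen above
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }

The wall-clock figure (real 0m1.427s for accel_compare) is dominated by the fixed one-second -t 1 run plus SPDK app start-up and teardown.
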
00:05:40.740 00:58:56 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y
00:05:40.740 00:58:56 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:05:40.740 00:58:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:40.740 00:58:56 accel -- common/autotest_common.sh@10 -- # set +x
00:05:40.740 ************************************
00:05:40.740 START TEST accel_xor
00:05:40.740 ************************************
00:05:40.740 00:58:56 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y
00:05:40.740 00:58:56 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc
00:05:40.740 00:58:56 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module
00:05:40.740 00:58:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:40.740 00:58:56 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y
00:05:40.740 00:58:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:40.740 00:58:56 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y
00:05:40.740 00:58:56 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config
00:05:40.740 00:58:56 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:40.740 00:58:56 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:05:40.740 00:58:56 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:40.740 00:58:56 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:40.740 00:58:56 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]]
00:05:40.740 00:58:56 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=,
00:05:40.740 00:58:56 accel.accel_xor -- accel/accel.sh@41 -- # jq -r .
00:05:40.740 [2024-07-16 00:58:56.400674] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization...
00:05:40.740 [2024-07-16 00:58:56.400741] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4039034 ]
00:05:40.740 EAL: No free 2048 kB hugepages reported on node 1
00:05:40.740 [2024-07-16 00:58:56.459174] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:40.740 [2024-07-16 00:58:56.571099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:40.740 00:58:56 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:05:40.740 00:58:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:40.740 00:58:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:40.740 00:58:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:40.740 00:58:56 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:05:40.740 00:58:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:40.740 00:58:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:40.740 00:58:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:40.740 00:58:56 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1
00:05:40.740 00:58:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:40.740 00:58:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:40.740 00:58:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:40.740 00:58:56 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:05:40.740 00:58:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:40.740 00:58:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:40.740 00:58:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:40.740 00:58:56 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:05:40.740 00:58:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:40.740 00:58:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:40.740 00:58:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:40.740 00:58:56 accel.accel_xor -- accel/accel.sh@20 -- # val=xor
00:05:40.740 00:58:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:40.740 00:58:56 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor
00:05:40.740 00:58:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:40.740 00:58:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:40.740 00:58:56 accel.accel_xor -- accel/accel.sh@20 -- # val=2
00:05:40.740 00:58:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:40.740 00:58:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:40.740 00:58:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:40.740 00:58:56 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:40.740 00:58:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:40.740 00:58:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:40.740 00:58:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:40.740 00:58:56 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:05:40.740 00:58:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:40.740 00:58:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:40.740 00:58:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:40.740 00:58:56 accel.accel_xor -- accel/accel.sh@20 -- # val=software
00:05:40.740 00:58:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:40.741 00:58:56 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software
00:05:40.741 00:58:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:40.741 00:58:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:40.741 00:58:56 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:05:40.741 00:58:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:40.741 00:58:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:40.741 00:58:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:40.741 00:58:56 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:05:40.741 00:58:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:40.741 00:58:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:40.741 00:58:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:40.741 00:58:56 accel.accel_xor -- accel/accel.sh@20 -- # val=1
00:05:40.741 00:58:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:40.741 00:58:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:40.741 00:58:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:40.741 00:58:56 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds'
00:05:40.741 00:58:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:40.741 00:58:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:40.741 00:58:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:40.741 00:58:56 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes
00:05:40.741 00:58:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:40.741 00:58:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:40.741 00:58:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:40.741 00:58:56 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:05:40.741 00:58:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:40.741 00:58:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:40.741 00:58:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:40.741 00:58:56 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:05:40.741 00:58:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:40.741 00:58:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:40.741 00:58:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:42.111 00:58:57 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:05:42.111 00:58:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:42.111 00:58:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:42.111 00:58:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:42.111 00:58:57 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:05:42.111 00:58:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:42.111 00:58:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:42.111 00:58:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:42.111 00:58:57 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:05:42.111 00:58:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:42.111 00:58:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:42.111 00:58:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:42.111 00:58:57 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:05:42.111 00:58:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:42.111 00:58:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:42.111 00:58:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:42.111 00:58:57 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:05:42.111 00:58:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:42.111 00:58:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:42.111 00:58:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:42.111 00:58:57 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:05:42.111 00:58:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:42.111 00:58:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:42.111 00:58:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:42.111 00:58:57 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:42.111 00:58:57 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:05:42.111 00:58:57 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:42.111
00:05:42.111 real 0m1.430s
00:05:42.111 user 0m1.292s
00:05:42.111 sys 0m0.139s
00:05:42.111 00:58:57 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:42.111 00:58:57 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x
00:05:42.111 ************************************
00:05:42.111 END TEST accel_xor
00:05:42.111 ************************************
00:05:42.111 00:58:57 accel -- common/autotest_common.sh@1142 -- # return 0
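
The same xor workload is now repeated with three source buffers instead of the default two. The traced command line makes the knobs explicit: -t 1 bounds the run to one second, -w selects the workload, -y asks accel_perf to verify the result (evidently what the val=Yes config line echoes), and -x sets the xor source count, visible as val=3 instead of val=2 in the parse trace below:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3
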
00:05:42.111 00:58:57 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3
00:05:42.111 00:58:57 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']'
00:05:42.111 00:58:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:42.111 00:58:57 accel -- common/autotest_common.sh@10 -- # set +x
00:05:42.111 ************************************
00:05:42.111 START TEST accel_xor
00:05:42.111 ************************************
00:05:42.111 00:58:57 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3
00:05:42.111 00:58:57 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc
00:05:42.111 00:58:57 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module
00:05:42.111 00:58:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:42.111 00:58:57 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3
00:05:42.111 00:58:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:42.111 00:58:57 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3
00:05:42.111 00:58:57 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config
00:05:42.111 00:58:57 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:42.111 00:58:57 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:05:42.111 00:58:57 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:42.111 00:58:57 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:42.111 00:58:57 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]]
00:05:42.111 00:58:57 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=,
00:05:42.111 00:58:57 accel.accel_xor -- accel/accel.sh@41 -- # jq -r .
00:05:42.111 [2024-07-16 00:58:57.880443] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization...
00:05:42.111 [2024-07-16 00:58:57.880507] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4039190 ]
00:05:42.111 EAL: No free 2048 kB hugepages reported on node 1
00:05:42.111 [2024-07-16 00:58:57.937868] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:42.111 [2024-07-16 00:58:58.047304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:42.111 00:58:58 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:05:42.111 00:58:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:42.111 00:58:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:42.111 00:58:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:42.111 00:58:58 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:05:42.111 00:58:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:42.111 00:58:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:42.111 00:58:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:42.111 00:58:58 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1
00:05:42.111 00:58:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:42.111 00:58:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:42.111 00:58:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:42.111 00:58:58 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:05:42.111 00:58:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:42.111 00:58:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:42.111 00:58:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:42.111 00:58:58 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:05:42.111 00:58:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:42.111 00:58:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:42.111 00:58:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:42.111 00:58:58 accel.accel_xor -- accel/accel.sh@20 -- # val=xor
00:05:42.111 00:58:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:42.111 00:58:58 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor
00:05:42.111 00:58:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:42.111 00:58:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:42.111 00:58:58 accel.accel_xor -- accel/accel.sh@20 -- # val=3
00:05:42.111 00:58:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:42.111 00:58:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:42.111 00:58:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:42.111 00:58:58 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:42.111 00:58:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:42.111 00:58:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:42.111 00:58:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:42.111 00:58:58 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:05:42.111 00:58:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:42.111 00:58:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:42.111 00:58:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:42.111 00:58:58 accel.accel_xor -- accel/accel.sh@20 -- # val=software
00:05:42.112 00:58:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:42.112 00:58:58 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software
00:05:42.375 00:58:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:42.375 00:58:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:42.375 00:58:58 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:05:42.375 00:58:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:42.375 00:58:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:42.375 00:58:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:42.375 00:58:58 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:05:42.375 00:58:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:42.375 00:58:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:42.375 00:58:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:42.375 00:58:58 accel.accel_xor -- accel/accel.sh@20 -- # val=1
00:05:42.375 00:58:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:42.375 00:58:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:42.375 00:58:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:42.375 00:58:58 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds'
00:05:42.375 00:58:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:42.375 00:58:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:42.375 00:58:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:42.375 00:58:58 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes
00:05:42.375 00:58:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:42.375 00:58:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:42.375 00:58:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:42.375 00:58:58 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:05:42.375 00:58:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:42.375 00:58:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:42.375 00:58:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:42.375 00:58:58 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:05:42.375 00:58:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:42.375 00:58:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:42.375 00:58:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:43.315 00:58:59 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:05:43.315 00:58:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:43.315 00:58:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:43.315 00:58:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:43.315 00:58:59 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:05:43.315 00:58:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:43.315 00:58:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:43.315 00:58:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:43.315 00:58:59 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:05:43.315 00:58:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:43.315 00:58:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:43.315 00:58:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:43.315 00:58:59 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:05:43.315 00:58:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:43.315 00:58:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:43.315 00:58:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:43.315 00:58:59 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:05:43.315 00:58:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:43.315 00:58:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:43.315 00:58:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:43.315 00:58:59 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:05:43.315 00:58:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:43.315 00:58:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:43.315 00:58:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:43.315 00:58:59 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:43.315 00:58:59 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:05:43.315 00:58:59 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:43.315
00:05:43.315 real 0m1.427s
00:05:43.315 user 0m1.300s
00:05:43.315 sys 0m0.130s
00:05:43.315 00:58:59 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:43.315 00:58:59 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x
00:05:43.572 ************************************
00:05:43.572 END TEST accel_xor
00:05:43.572 ************************************
00:05:43.572 00:58:59 accel -- common/autotest_common.sh@1142 -- # return 0
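
The two DIF cases that follow parse a longer configuration than the arithmetic workloads: besides the usual core mask (0x1, matching the -c 0x1 EAL parameter), software module and '1 seconds' duration, the trace echoes '4096 bytes' twice plus '512 bytes' and '8 bytes' values -- by all appearances the protection-information geometry, a 4096-byte payload split into 512-byte blocks with an 8-byte DIF tag per block. Note also val=No in place of val=Yes: the dif cases are invoked without -y, e.g.:

    run_test accel_dif_verify accel_test -t 1 -w dif_verify
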
00:05:43.572 00:58:59 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify
00:05:43.572 00:58:59 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']'
00:05:43.572 00:58:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:43.572 00:58:59 accel -- common/autotest_common.sh@10 -- # set +x
00:05:43.572 ************************************
00:05:43.572 START TEST accel_dif_verify
00:05:43.572 ************************************
00:05:43.572 00:58:59 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify
00:05:43.572 00:58:59 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc
00:05:43.572 00:58:59 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module
00:05:43.572 00:58:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:05:43.572 00:58:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:05:43.572 00:58:59 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify
00:05:43.572 00:58:59 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify
00:05:43.572 00:58:59 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config
00:05:43.572 00:58:59 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:43.572 00:58:59 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:05:43.572 00:58:59 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:43.572 00:58:59 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:43.572 00:58:59 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]]
00:05:43.572 00:58:59 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=,
00:05:43.572 00:58:59 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r .
00:05:43.572 [2024-07-16 00:58:59.360713] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization...
00:05:43.572 [2024-07-16 00:58:59.360772] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4039464 ]
00:05:43.572 EAL: No free 2048 kB hugepages reported on node 1
00:05:43.572 [2024-07-16 00:58:59.416456] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:43.572 [2024-07-16 00:58:59.519662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:43.829 00:58:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:05:43.829 00:58:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:05:43.829 00:58:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:05:43.829 00:58:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:05:43.829 00:58:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:05:43.829 00:58:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:05:43.829 00:58:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:05:43.829 00:58:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:05:43.829 00:58:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1
00:05:43.829 00:58:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:05:43.829 00:58:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:05:43.829 00:58:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:05:43.829 00:58:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:05:43.829 00:58:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:05:43.829 00:58:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:05:43.829 00:58:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:05:43.829 00:58:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:05:43.829 00:58:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:05:43.829 00:58:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:05:43.829 00:58:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:05:43.829 00:58:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify
00:05:43.829 00:58:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:05:43.829 00:58:59 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify
00:05:43.829 00:58:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:05:43.829 00:58:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:05:43.829 00:58:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:43.829 00:58:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:05:43.829 00:58:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:05:43.829 00:58:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:05:43.829 00:58:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:43.829 00:58:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:05:43.829 00:58:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:05:43.829 00:58:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:05:43.829 00:58:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes'
00:05:43.830 00:58:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:05:43.830 00:58:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:05:43.830 00:58:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:05:43.830 00:58:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes'
00:05:43.830 00:58:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:05:43.830 00:58:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:05:43.830 00:58:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:05:43.830 00:58:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:05:43.830 00:58:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:05:43.830 00:58:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:05:43.830 00:58:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:05:43.830 00:58:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software
00:05:43.830 00:58:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:05:43.830 00:58:59 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software
00:05:43.830 00:58:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:05:43.830 00:58:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:05:43.830 00:58:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32
00:05:43.830 00:58:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:05:43.830 00:58:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:05:43.830 00:58:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:05:43.830 00:58:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32
00:05:43.830 00:58:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:05:43.830 00:58:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:05:43.830 00:58:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:05:43.830 00:58:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1
00:05:43.830 00:58:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:05:43.830 00:58:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:05:43.830 00:58:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:05:43.830 00:58:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds'
00:05:43.830 00:58:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:05:43.830 00:58:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:05:43.830 00:58:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:05:43.830 00:58:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No
00:05:43.830 00:58:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:05:43.830 00:58:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:05:43.830 00:58:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:05:43.830 00:58:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:05:43.830 00:58:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:05:43.830 00:58:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:05:43.830 00:58:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:05:43.830 00:58:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:05:43.830 00:58:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:05:43.830 00:58:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:05:43.830 00:58:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:05:44.787 00:59:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:05:44.787 00:59:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:05:44.787 00:59:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:05:44.787 00:59:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:05:44.787 00:59:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:05:44.787 00:59:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:05:44.787 00:59:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:05:44.787 00:59:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:05:44.787 00:59:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:05:44.787 00:59:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:05:44.787 00:59:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:05:44.787 00:59:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:05:44.787 00:59:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:05:44.787 00:59:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:05:44.787 00:59:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:05:44.787 00:59:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:05:44.787 00:59:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:05:44.787 00:59:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:05:44.787 00:59:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:05:44.787 00:59:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:05:44.787 00:59:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:05:44.787 00:59:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:05:44.787 00:59:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:05:44.787 00:59:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:05:44.787 00:59:00 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:44.787 00:59:00 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]]
00:05:44.787 00:59:00 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:44.787
00:05:44.787 real 0m1.430s
00:05:44.787 user 0m1.303s
00:05:44.787 sys 0m0.130s
00:05:44.787 00:59:00 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:44.787 00:59:00 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x
00:05:45.045 ************************************
00:05:45.045 END TEST accel_dif_verify
00:05:45.045 ************************************
00:05:45.045 00:59:00 accel -- common/autotest_common.sh@1142 -- # return 0
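
A small decoding aid for the '[' N -le 1 ']' guard that precedes each case: N is simply the number of arguments handed to run_test, so the check at autotest_common.sh line 1099 is evidently a sanity guard that more than a bare test name was supplied:

    run_test accel_compare accel_test -t 1 -w compare -y           # 7 args -> '[' 7 -le 1 ']'
    run_test accel_xor accel_test -t 1 -w xor -y -x 3              # 9 args -> '[' 9 -le 1 ']'
    run_test accel_dif_generate accel_test -t 1 -w dif_generate    # 6 args -> '[' 6 -le 1 ']'
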
00:59:00 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:05:45.045 00:59:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:45.045 00:59:00 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:45.045 00:59:00 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:05:45.045 00:59:00 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:45.045 00:59:00 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:45.045 00:59:00 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:45.045 00:59:00 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:45.045 00:59:00 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:45.045 00:59:00 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:05:45.045 00:59:00 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:05:45.045 [2024-07-16 00:59:00.845413] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:05:45.045 [2024-07-16 00:59:00.845475] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4039617 ] 00:05:45.045 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.045 [2024-07-16 00:59:00.903175] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.045 [2024-07-16 00:59:01.013839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.303 00:59:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:45.303 00:59:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:45.303 00:59:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:45.303 00:59:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:45.303 00:59:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:45.303 00:59:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:45.303 00:59:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:45.303 00:59:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:45.303 00:59:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:05:45.303 00:59:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:45.303 00:59:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:45.303 00:59:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:45.303 00:59:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:45.303 00:59:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:45.303 00:59:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:45.303 00:59:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:45.303 00:59:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:45.303 00:59:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:45.303 00:59:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:45.303 00:59:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:45.304 00:59:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:05:45.304 00:59:01 
00:05:45.304 00:59:01 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate
00:05:45.304 00:59:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:45.304 00:59:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:45.304 00:59:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes'
00:05:45.304 00:59:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes'
00:05:45.304 00:59:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software
00:05:45.304 00:59:01 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software
00:05:45.304 00:59:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32
00:05:45.304 00:59:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32
00:05:45.304 00:59:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1
00:05:45.304 00:59:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds'
00:05:45.304 00:59:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No
00:05:45.304 (repeated option-loop xtrace elided: case "$var" in / IFS=: / read -r var val after each value)
00:05:46.673 00:59:02 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:46.673 00:59:02 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]]
00:05:46.673 00:59:02 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:46.673 
00:05:46.673 real    0m1.420s
00:05:46.673 user    0m1.297s
00:05:46.673 sys     0m0.126s
00:05:46.673 00:59:02 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:46.673 00:59:02 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x
00:05:46.673 ************************************
00:05:46.673 END TEST accel_dif_generate
00:05:46.673 ************************************
00:05:46.673 00:59:02 accel -- common/autotest_common.sh@1142 -- # return 0
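Each accel_perf run recorded in this log receives its JSON config over file descriptor 62 (the -c /dev/fd/62 argument assembled by accel.sh). As a rough standalone reproduction of the dif_generate case above: the binary path and the -c/-t/-w flags are copied from the trace, while the minimal config body is an assumption (the harness's build_accel_config generates the real one).

    #!/usr/bin/env bash
    # Hypothetical standalone re-run of the traced dif_generate case.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # checkout path seen in this log

    # A here-document on fd 62 supplies the config that '-c /dev/fd/62' reads;
    # the empty subsystem list is an assumed stand-in for the generated config.
    "$SPDK/build/examples/accel_perf" -c /dev/fd/62 -t 1 -w dif_generate 62<<'JSON'
    {"subsystems": []}
    JSON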
00:05:46.673 00:59:02 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy
00:05:46.673 00:59:02 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']'
00:05:46.673 ************************************
00:05:46.673 START TEST accel_dif_generate_copy
00:05:46.673 ************************************
00:05:46.674 00:59:02 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy
00:05:46.674 00:59:02 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config
00:05:46.674 (accel_test/build_accel_config setup xtrace elided)
00:05:46.674 [2024-07-16 00:59:02.311521] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization...
00:05:46.674 [2024-07-16 00:59:02.311592] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4039786 ]
00:05:46.674 EAL: No free 2048 kB hugepages reported on node 1
00:05:46.674 [2024-07-16 00:59:02.368425] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:46.674 [2024-07-16 00:59:02.472437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:46.674 00:59:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1
00:05:46.674 00:59:02 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy
00:05:46.674 00:59:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:46.674 00:59:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:46.674 00:59:02 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software
00:05:46.674 00:59:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32
00:05:46.674 00:59:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32
00:05:46.674 00:59:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1
00:05:46.674 00:59:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds'
00:05:46.674 00:59:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No
00:05:46.674 (repeated option-loop xtrace elided)
00:05:48.042 00:59:03 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:48.042 00:59:03 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]]
00:05:48.042 00:59:03 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:48.042 
00:05:48.042 real    0m1.427s
00:05:48.042 user    0m1.294s
00:05:48.042 sys     0m0.134s
00:05:48.042 00:59:03 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:48.042 00:59:03 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x
00:05:48.042 ************************************
00:05:48.042 END TEST accel_dif_generate_copy
00:05:48.042 ************************************
00:05:48.042 00:59:03 accel -- common/autotest_common.sh@1142 -- # return 0
00:05:48.042 00:59:03 accel -- accel/accel.sh@115 -- # [[ y == y ]]
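The START/END banners and the real/user/sys triplets around each case come from the harness's run_test wrapper in test/common/autotest_common.sh. The following is only a stripped-down sketch of its observable behavior; the real helper also toggles xtrace and records failures.

    # Simplified run_test look-alike: banner, time the command, banner.
    run_test() {
        local name=$1
        shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"            # emits the real/user/sys lines seen in this log
        local rc=$?
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return "$rc"
    }

    run_test demo_sleep sleep 1   # example invocation; prints banners plus timings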
00:05:48.042 00:59:03 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:05:48.042 00:59:03 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']'
00:05:48.042 ************************************
00:05:48.042 START TEST accel_comp
00:05:48.042 ************************************
00:05:48.043 00:59:03 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:05:48.043 00:59:03 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config
00:05:48.043 (accel_test/build_accel_config setup xtrace elided)
00:05:48.043 [2024-07-16 00:59:03.787742] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization...
00:05:48.043 [2024-07-16 00:59:03.787803] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4039992 ]
00:05:48.043 EAL: No free 2048 kB hugepages reported on node 1
00:05:48.043 [2024-07-16 00:59:03.845394] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:48.043 [2024-07-16 00:59:03.950998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:48.043 00:59:04 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1
00:05:48.043 00:59:04 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress
00:05:48.043 00:59:04 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:48.043 00:59:04 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software
00:05:48.043 00:59:04 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:05:48.043 00:59:04 accel.accel_comp -- accel/accel.sh@20 -- # val=32
00:05:48.043 00:59:04 accel.accel_comp -- accel/accel.sh@20 -- # val=32
00:05:48.043 00:59:04 accel.accel_comp -- accel/accel.sh@20 -- # val=1
00:05:48.043 00:59:04 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds'
00:05:48.043 00:59:04 accel.accel_comp -- accel/accel.sh@20 -- # val=No
00:05:48.043 (repeated option-loop xtrace elided)
00:05:49.412 00:59:05 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:49.412 00:59:05 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]]
00:05:49.412 00:59:05 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:49.412 
00:05:49.412 real    0m1.439s
00:05:49.412 user    0m1.304s
00:05:49.412 sys     0m0.137s
00:05:49.412 00:59:05 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:49.412 00:59:05 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x
00:05:49.412 ************************************
00:05:49.412 END TEST accel_comp
00:05:49.412 ************************************
00:05:49.412 00:59:05 accel -- common/autotest_common.sh@1142 -- # return 0
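Unlike the DIF cases, the compress run above also needs an input corpus, passed with -l test/accel/bib. Reproducing just that invocation, with the same hedges as the earlier sketch (flags and paths from the log, config body assumed):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # A here-string on fd 62 stands in for the harness-generated config.
    "$SPDK/build/examples/accel_perf" -c /dev/fd/62 -t 1 \
        -w compress -l "$SPDK/test/accel/bib" 62<<<'{"subsystems": []}'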
00:05:49.412 00:59:05 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
00:05:49.412 00:59:05 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']'
00:05:49.412 ************************************
00:05:49.412 START TEST accel_decomp
00:05:49.412 ************************************
00:05:49.412 00:59:05 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
00:05:49.412 00:59:05 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config
00:05:49.412 (accel_test/build_accel_config setup xtrace elided)
00:05:49.412 [2024-07-16 00:59:05.276416] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization...
00:05:49.413 [2024-07-16 00:59:05.276479] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4040212 ]
00:05:49.413 EAL: No free 2048 kB hugepages reported on node 1
00:05:49.413 [2024-07-16 00:59:05.331751] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:49.670 [2024-07-16 00:59:05.437888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:49.670 00:59:05 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1
00:05:49.670 00:59:05 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress
00:05:49.670 00:59:05 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:49.670 00:59:05 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software
00:05:49.670 00:59:05 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:05:49.670 00:59:05 accel.accel_decomp -- accel/accel.sh@20 -- # val=32
00:05:49.670 00:59:05 accel.accel_decomp -- accel/accel.sh@20 -- # val=32
00:05:49.670 00:59:05 accel.accel_decomp -- accel/accel.sh@20 -- # val=1
00:05:49.670 00:59:05 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds'
00:05:49.670 00:59:05 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes
00:05:49.670 (repeated option-loop xtrace elided)
00:05:51.042 00:59:06 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:51.042 00:59:06 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:05:51.042 00:59:06 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:51.042 
00:05:51.042 real    0m1.428s
00:05:51.042 user    0m1.295s
00:05:51.042 sys     0m0.136s
00:05:51.042 00:59:06 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:51.042 00:59:06 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x
00:05:51.042 ************************************
00:05:51.042 END TEST accel_decomp
00:05:51.042 ************************************
00:05:51.042 00:59:06 accel -- common/autotest_common.sh@1142 -- # return 0
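The case "$var" in / IFS=: / read -r var val fragments that dominate this trace are a single option loop in accel.sh, printed once per iteration by xtrace: accel_test streams key/value pairs and the loop folds them into variables such as accel_opc and accel_module. A self-contained re-creation of the pattern follows; the loop shape matches the trace, but the key names and stream format here are assumptions.

    # Split each input line on ':' and dispatch on the key, as the traced loop does.
    while IFS=: read -r var val; do
        case "$var" in
            opc) accel_opc=$val ;;          # traced above as accel_opc=decompress
            module) accel_module=$val ;;    # traced above as accel_module=software
            *) ;;                           # buffer sizes, queue depth, duration, ...
        esac
    done <<'EOF'
    opc:decompress
    module:software
    EOF
    echo "opc=$accel_opc module=$accel_module"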
00:05:51.042 00:59:06 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0
00:05:51.042 00:59:06 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']'
00:05:51.042 ************************************
00:05:51.042 START TEST accel_decomp_full
00:05:51.042 ************************************
00:05:51.042 00:59:06 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0
00:05:51.042 00:59:06 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config
00:05:51.043 (accel_test/build_accel_config setup xtrace elided)
00:05:51.043 [2024-07-16 00:59:06.750614] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization...
00:05:51.043 [2024-07-16 00:59:06.750677] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4040371 ]
00:05:51.043 EAL: No free 2048 kB hugepages reported on node 1
00:05:51.043 [2024-07-16 00:59:06.806502] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:51.043 [2024-07-16 00:59:06.908995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:51.043 00:59:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1
00:05:51.043 00:59:06 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress
00:05:51.043 00:59:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes'
00:05:51.043 00:59:06 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software
00:05:51.043 00:59:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:05:51.043 00:59:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32
00:05:51.043 00:59:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32
00:05:51.043 00:59:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1
00:05:51.043 00:59:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds'
00:05:51.043 00:59:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes
00:05:51.043 (repeated option-loop xtrace elided)
00:05:52.416 00:59:08 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:52.416 00:59:08 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:05:52.416 00:59:08 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:52.416 
00:05:52.416 real    0m1.438s
00:05:52.416 user    0m1.312s
00:05:52.416 sys     0m0.128s
00:05:52.416 00:59:08 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:52.416 00:59:08 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x
00:05:52.416 ************************************
00:05:52.416 END TEST accel_decomp_full
00:05:52.416 ************************************
00:05:52.416 00:59:08 accel -- common/autotest_common.sh@1142 -- # return 0
00:05:52.416 00:59:08 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:05:52.416 00:59:08 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']'
00:05:52.416 ************************************
00:05:52.416 START TEST accel_decomp_mcore
00:05:52.416 ************************************
00:05:52.417 00:59:08 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:05:52.417 00:59:08 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config
00:05:52.417 (accel_test/build_accel_config setup xtrace elided)
00:05:52.417 [2024-07-16 00:59:08.238435] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization...
00:05:52.676 [2024-07-16 00:59:08.238498] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4040532 ]
00:05:52.676 EAL: No free 2048 kB hugepages reported on node 1
00:05:52.676 [2024-07-16 00:59:08.298008] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:52.676 [2024-07-16 00:59:08.410572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:05:52.676 [2024-07-16 00:59:08.410664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:05:52.676 [2024-07-16 00:59:08.410734] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:05:52.676 [2024-07-16 00:59:08.410739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:52.676 00:59:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf
00:05:52.676 00:59:08 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress
00:05:52.676 00:59:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:52.676 00:59:08 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software
00:05:52.676 00:59:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:05:52.676 00:59:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32
00:05:52.676 00:59:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32
00:05:52.676 00:59:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1
00:05:52.676 00:59:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds'
00:05:52.676 00:59:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes
00:05:52.676 (repeated option-loop xtrace elided)
00:05:54.049 00:59:09 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:54.049 00:59:09 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:05:54.049 00:59:09 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:54.049 
00:05:54.049 real    0m1.453s
00:05:54.049 user    0m4.741s
00:05:54.049 sys     0m0.142s
00:05:54.050 00:59:09 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:54.050 00:59:09 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x
00:05:54.050 ************************************
00:05:54.050 END TEST accel_decomp_mcore
00:05:54.050 ************************************
00:05:54.050 00:59:09 accel -- common/autotest_common.sh@1142 -- # return 0
00:05:54.050 [2024-07-16 00:59:09.736904] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4040801 ] 00:05:54.050 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.050 [2024-07-16 00:59:09.794223] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:54.050 [2024-07-16 00:59:09.900661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.050 [2024-07-16 00:59:09.900770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:54.050 [2024-07-16 00:59:09.900900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:54.050 [2024-07-16 00:59:09.900903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:54.050 00:59:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:55.424 00:59:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:55.424 00:59:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:55.424 00:59:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:55.424 00:59:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:55.424 00:59:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:55.424 00:59:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:55.424 00:59:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:55.424 00:59:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:55.424 00:59:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:55.424 00:59:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:55.424 00:59:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:55.424 00:59:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:55.424 00:59:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:55.424 00:59:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:55.424 00:59:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:55.424 00:59:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:55.424 00:59:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:55.424 00:59:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:55.424 00:59:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:55.424 00:59:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:55.424 00:59:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:55.424 00:59:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:55.424 00:59:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:55.424 00:59:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:55.424 00:59:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:55.424 00:59:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:55.424 00:59:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:55.424 00:59:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:55.424 00:59:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:55.424 00:59:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:55.424 00:59:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:55.424 00:59:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:55.424 00:59:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:55.424 00:59:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:55.424 00:59:11 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:05:55.424 00:59:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:55.424 00:59:11 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:55.424 00:59:11 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:55.424 00:59:11 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:55.424 00:05:55.424 real 0m1.460s 00:05:55.424 user 0m4.785s 00:05:55.424 sys 0m0.151s 00:05:55.424 00:59:11 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.424 00:59:11 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:55.424 ************************************ 00:05:55.424 END TEST accel_decomp_full_mcore 00:05:55.424 ************************************ 00:05:55.424 00:59:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:55.424 00:59:11 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:55.424 00:59:11 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:55.424 00:59:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.424 00:59:11 accel -- common/autotest_common.sh@10 -- # set +x 00:05:55.424 ************************************ 00:05:55.424 START TEST accel_decomp_mthread 00:05:55.424 ************************************ 00:05:55.424 00:59:11 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:55.424 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:55.424 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:55.424 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:55.424 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:55.424 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:55.424 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:55.424 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:55.424 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:55.424 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:55.424 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:55.424 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:55.424 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:55.424 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:05:55.424 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:05:55.424 [2024-07-16 00:59:11.242710] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 
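A note on the accel_decomp_full_mcore run that just wrapped up above: accel_perf was launched with -m 0xf, a hexadecimal core mask, which is why the EAL arguments carry -c 0xf, why four reactors come up on cores 0-3, and why user CPU time (~4.8s) is roughly four times the ~1.5s wall time. A throwaway sketch for decoding such a mask, not part of the test scripts themselves:

    # Decode a DPDK/SPDK core mask into the cores it selects: 0xf -> 0 1 2 3.
    mask=0xf
    for core in {0..63}; do
      (( (mask >> core) & 1 )) && printf '%d ' "$core"
    done
    echo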
00:05:55.424 [2024-07-16 00:59:11.242773] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4040966 ] 00:05:55.424 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.424 [2024-07-16 00:59:11.301565] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.424 [2024-07-16 00:59:11.417621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:55.683 00:59:11 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:55.683 00:59:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:57.056 00:59:12 accel.accel_decomp_mthread 
-- accel/accel.sh@20 -- # val= 00:05:57.057 00:59:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:57.057 00:59:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:57.057 00:59:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:57.057 00:59:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:57.057 00:59:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:57.057 00:59:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:57.057 00:59:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:57.057 00:59:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:57.057 00:59:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:57.057 00:59:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:57.057 00:59:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:57.057 00:59:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:57.057 00:59:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:57.057 00:59:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:57.057 00:59:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:57.057 00:59:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:57.057 00:59:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:57.057 00:59:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:57.057 00:59:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:57.057 00:59:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:57.057 00:59:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:57.057 00:59:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:57.057 00:59:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:57.057 00:59:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:57.057 00:59:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:57.057 00:59:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:57.057 00:59:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:57.057 00:59:12 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:57.057 00:59:12 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:57.057 00:59:12 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:57.057 00:05:57.057 real 0m1.442s 00:05:57.057 user 0m1.300s 00:05:57.057 sys 0m0.145s 00:05:57.057 00:59:12 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.057 00:59:12 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:05:57.057 ************************************ 00:05:57.057 END TEST accel_decomp_mthread 00:05:57.057 ************************************ 00:05:57.057 00:59:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:57.057 00:59:12 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:57.057 00:59:12 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:57.057 00:59:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.057 00:59:12 accel -- 
common/autotest_common.sh@10 -- # set +x 00:05:57.057 ************************************ 00:05:57.057 START TEST accel_decomp_full_mthread 00:05:57.057 ************************************ 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:05:57.057 [2024-07-16 00:59:12.733410] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 
00:05:57.057 [2024-07-16 00:59:12.733472] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4041126 ] 00:05:57.057 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.057 [2024-07-16 00:59:12.790327] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.057 [2024-07-16 00:59:12.896523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:57.057 00:59:12 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:57.057 00:59:12 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:57.057 00:59:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:58.428 00:59:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:58.428 00:59:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:58.428 00:59:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:58.428 00:59:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:58.428 00:59:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:58.428 00:59:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:58.428 00:59:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:58.428 00:59:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:58.428 00:59:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:58.428 00:59:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:58.428 00:59:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:58.428 00:59:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:58.428 00:59:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:58.428 00:59:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:58.428 00:59:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:58.428 00:59:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:58.428 00:59:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:58.428 00:59:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:58.428 00:59:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:58.428 00:59:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:58.428 00:59:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:58.428 00:59:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:58.428 00:59:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:58.428 00:59:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:58.428 00:59:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:58.428 00:59:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:58.428 00:59:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:58.428 00:59:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:58.428 00:59:14 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:58.428 00:59:14 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:58.428 00:59:14 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:58.428 00:05:58.428 real 0m1.460s 00:05:58.428 user 0m1.332s 00:05:58.428 sys 0m0.130s 00:05:58.428 00:59:14 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.428 00:59:14 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:05:58.428 ************************************ 00:05:58.428 END 
TEST accel_decomp_full_mthread 00:05:58.428 ************************************ 00:05:58.428 00:59:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:58.428 00:59:14 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:05:58.428 00:59:14 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:58.428 00:59:14 accel -- accel/accel.sh@137 -- # build_accel_config 00:05:58.428 00:59:14 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:58.428 00:59:14 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:58.428 00:59:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.428 00:59:14 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:58.428 00:59:14 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:58.428 00:59:14 accel -- common/autotest_common.sh@10 -- # set +x 00:05:58.428 00:59:14 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:58.428 00:59:14 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:58.428 00:59:14 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:58.428 00:59:14 accel -- accel/accel.sh@41 -- # jq -r . 00:05:58.428 ************************************ 00:05:58.428 START TEST accel_dif_functional_tests 00:05:58.428 ************************************ 00:05:58.428 00:59:14 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:58.428 [2024-07-16 00:59:14.258687] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:05:58.428 [2024-07-16 00:59:14.258748] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4041401 ] 00:05:58.428 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.428 [2024-07-16 00:59:14.313027] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:58.428 [2024-07-16 00:59:14.418797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.428 [2024-07-16 00:59:14.418820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:58.428 [2024-07-16 00:59:14.418823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.686 00:05:58.686 00:05:58.686 CUnit - A unit testing framework for C - Version 2.1-3 00:05:58.686 http://cunit.sourceforge.net/ 00:05:58.686 00:05:58.686 00:05:58.686 Suite: accel_dif 00:05:58.686 Test: verify: DIF generated, GUARD check ...passed 00:05:58.686 Test: verify: DIF generated, APPTAG check ...passed 00:05:58.686 Test: verify: DIF generated, REFTAG check ...passed 00:05:58.686 Test: verify: DIF not generated, GUARD check ...[2024-07-16 00:59:14.514624] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:58.686 passed 00:05:58.686 Test: verify: DIF not generated, APPTAG check ...[2024-07-16 00:59:14.514712] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:58.686 passed 00:05:58.686 Test: verify: DIF not generated, REFTAG check ...[2024-07-16 00:59:14.514746] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:58.686 passed 00:05:58.686 Test: verify: APPTAG correct, APPTAG check ...passed 00:05:58.686 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-16 
00:59:14.514819] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:05:58.686 passed 00:05:58.686 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:05:58.686 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:05:58.686 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:05:58.686 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-16 00:59:14.514978] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:05:58.686 passed 00:05:58.686 Test: verify copy: DIF generated, GUARD check ...passed 00:05:58.686 Test: verify copy: DIF generated, APPTAG check ...passed 00:05:58.686 Test: verify copy: DIF generated, REFTAG check ...passed 00:05:58.686 Test: verify copy: DIF not generated, GUARD check ...[2024-07-16 00:59:14.515165] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:58.686 passed 00:05:58.686 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-16 00:59:14.515205] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:58.686 passed 00:05:58.686 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-16 00:59:14.515243] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:58.686 passed 00:05:58.686 Test: generate copy: DIF generated, GUARD check ...passed 00:05:58.686 Test: generate copy: DIF generated, APTTAG check ...passed 00:05:58.686 Test: generate copy: DIF generated, REFTAG check ...passed 00:05:58.686 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:05:58.686 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:05:58.686 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:05:58.686 Test: generate copy: iovecs-len validate ...[2024-07-16 00:59:14.515502] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
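The *ERROR* lines above are expected output, not failures: the DIF suite deliberately corrupts Guard, App Tag, and Ref Tag fields and a test passes when verification rejects the bad data. Also visible throughout is the -c /dev/fd/62 argument, which appears to come from bash process substitution: the harness builds a JSON config in memory and hands it to the binary as a virtual file rather than writing a temp file. A minimal sketch of that pattern, with a placeholder config body rather than the real accel config:

    # <(...) expands to a /dev/fd/NN path that the child process can open and read.
    show_config() { echo "reading $1:"; cat "$1"; }
    show_config <(printf '{"subsystems": []}\n')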
00:05:58.686 passed
00:05:58.686 Test: generate copy: buffer alignment validate ...passed
00:05:58.686
00:05:58.686 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:05:58.686               suites      1      1    n/a      0        0
00:05:58.686                tests     26     26     26      0        0
00:05:58.686              asserts    115    115    115      0      n/a
00:05:58.686
00:05:58.686 Elapsed time =    0.003 seconds
00:05:58.944
00:05:58.944 real    0m0.536s
00:05:58.944 user    0m0.825s
00:05:58.944 sys     0m0.174s 00:59:14 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:58.944 00:59:14 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x
00:05:58.944 ************************************
00:05:58.944 END TEST accel_dif_functional_tests
00:05:58.944 ************************************
00:05:58.944 00:59:14 accel -- common/autotest_common.sh@1142 -- # return 0
00:05:58.944
00:05:58.944 real    0m32.413s
00:05:58.944 user    0m36.021s
00:05:58.944 sys     0m4.357s
00:05:58.944 00:59:14 accel -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:58.944 00:59:14 accel -- common/autotest_common.sh@10 -- # set +x
00:05:58.944 ************************************
00:05:58.944 END TEST accel
00:05:58.944 ************************************
00:05:58.944 00:59:14 -- common/autotest_common.sh@1142 -- # return 0
00:05:58.944 00:59:14 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh
00:05:58.944 00:59:14 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:58.944 00:59:14 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:58.944 00:59:14 -- common/autotest_common.sh@10 -- # set +x
00:05:58.944 ************************************
00:05:58.944 START TEST accel_rpc
00:05:58.944 ************************************
00:05:58.944 00:59:14 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh
00:05:58.944 * Looking for test storage...
00:05:58.944 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel
00:05:58.944 00:59:14 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:05:58.944 00:59:14 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=4041471
00:05:58.944 00:59:14 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc
00:05:58.944 00:59:14 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 4041471
00:05:58.944 00:59:14 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 4041471 ']'
00:05:58.944 00:59:14 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:58.944 00:59:14 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:58.944 00:59:14 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:58.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:58.944 00:59:14 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:58.944 00:59:14 accel_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:58.944 [2024-07-16 00:59:14.910909] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization...
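accel_rpc.sh starts spdk_tgt with --wait-for-rpc, so the target idles until framework_start_init arrives over the RPC socket, and waitforlisten blocks until that socket answers. A rough stand-in for that wait loop, assuming it is run from the spdk checkout (the real helper in common/autotest_common.sh is more thorough, and the -t timeout flag of rpc.py is assumed here):

    ./build/bin/spdk_tgt --wait-for-rpc &
    pid=$!
    until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$pid" 2>/dev/null || { echo 'target exited early' >&2; exit 1; }
      sleep 0.5
    done
    echo "target pid $pid is up"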
00:05:58.944 [2024-07-16 00:59:14.911013] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4041471 ] 00:05:58.944 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.202 [2024-07-16 00:59:14.967998] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.202 [2024-07-16 00:59:15.079974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.202 00:59:15 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:59.202 00:59:15 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:59.202 00:59:15 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:05:59.202 00:59:15 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:05:59.202 00:59:15 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:05:59.202 00:59:15 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:05:59.202 00:59:15 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:05:59.202 00:59:15 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:59.202 00:59:15 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.202 00:59:15 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.202 ************************************ 00:05:59.202 START TEST accel_assign_opcode 00:05:59.202 ************************************ 00:05:59.202 00:59:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:05:59.202 00:59:15 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:05:59.202 00:59:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:59.202 00:59:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:59.203 [2024-07-16 00:59:15.148588] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:05:59.203 00:59:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:59.203 00:59:15 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:05:59.203 00:59:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:59.203 00:59:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:59.203 [2024-07-16 00:59:15.156601] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:05:59.203 00:59:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:59.203 00:59:15 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:05:59.203 00:59:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:59.203 00:59:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:59.461 00:59:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:59.461 00:59:15 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:05:59.461 00:59:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:59.461 00:59:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 
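The assignment test above appears to rely on the fact that, before framework_start_init, accel_assign_opc only records the requested mapping; that would be why assigning the copy opcode to a module literally named "incorrect" is accepted with just a NOTICE, and why the later assignment to software simply overwrites it. The same sequence against a --wait-for-rpc target, condensed from the trace:

    ./scripts/rpc.py accel_assign_opc -o copy -m incorrect   # recorded now, resolved only at init
    ./scripts/rpc.py accel_assign_opc -o copy -m software    # overrides the previous mapping
    ./scripts/rpc.py framework_start_init                    # accel framework initializes here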
00:05:59.461 00:59:15 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:05:59.461 00:59:15 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:05:59.461 00:59:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:59.461 software 00:05:59.461 00:05:59.461 real 0m0.274s 00:05:59.461 user 0m0.038s 00:05:59.461 sys 0m0.008s 00:05:59.461 00:59:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.461 00:59:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:59.461 ************************************ 00:05:59.461 END TEST accel_assign_opcode 00:05:59.461 ************************************ 00:05:59.461 00:59:15 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:59.461 00:59:15 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 4041471 00:05:59.461 00:59:15 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 4041471 ']' 00:05:59.461 00:59:15 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 4041471 00:05:59.461 00:59:15 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:05:59.461 00:59:15 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:59.461 00:59:15 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4041471 00:05:59.719 00:59:15 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:59.719 00:59:15 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:59.719 00:59:15 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4041471' 00:05:59.719 killing process with pid 4041471 00:05:59.719 00:59:15 accel_rpc -- common/autotest_common.sh@967 -- # kill 4041471 00:05:59.719 00:59:15 accel_rpc -- common/autotest_common.sh@972 -- # wait 4041471 00:05:59.977 00:05:59.977 real 0m1.074s 00:05:59.977 user 0m1.035s 00:05:59.977 sys 0m0.398s 00:05:59.977 00:59:15 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.977 00:59:15 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.977 ************************************ 00:05:59.977 END TEST accel_rpc 00:05:59.977 ************************************ 00:05:59.977 00:59:15 -- common/autotest_common.sh@1142 -- # return 0 00:05:59.977 00:59:15 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:59.977 00:59:15 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:59.977 00:59:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.977 00:59:15 -- common/autotest_common.sh@10 -- # set +x 00:05:59.977 ************************************ 00:05:59.977 START TEST app_cmdline 00:05:59.977 ************************************ 00:05:59.977 00:59:15 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:00.236 * Looking for test storage... 
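After initialization the suite confirms the mapping stuck by querying the assignment table; the jq -r .copy | grep software pipeline traced above is doing exactly this. Run by hand from the checkout it looks like:

    ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy   # expected output: software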
00:06:00.236 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app
00:06:00.236 00:59:15 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT
00:06:00.236 00:59:15 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=4041675
00:06:00.236 00:59:15 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods
00:06:00.236 00:59:15 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 4041675
00:06:00.236 00:59:15 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 4041675 ']'
00:06:00.236 00:59:15 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:00.236 00:59:15 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:00.236 00:59:15 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:00.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:00.236 00:59:15 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:00.236 00:59:15 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:06:00.236 [2024-07-16 00:59:16.039886] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization...
00:06:00.236 [2024-07-16 00:59:16.039989] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4041675 ]
00:06:00.236 EAL: No free 2048 kB hugepages reported on node 1
00:06:00.236 [2024-07-16 00:59:16.096122] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:00.236 [2024-07-16 00:59:16.200638] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:00.494 00:59:16 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:00.494 00:59:16 app_cmdline -- common/autotest_common.sh@862 -- # return 0
00:06:00.494 00:59:16 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version
00:06:00.752 {
00:06:00.752   "version": "SPDK v24.09-pre git sha1 fd0bbcfdd",
00:06:00.752   "fields": {
00:06:00.752     "major": 24,
00:06:00.752     "minor": 9,
00:06:00.752     "patch": 0,
00:06:00.752     "suffix": "-pre",
00:06:00.752     "commit": "fd0bbcfdd"
00:06:00.752   }
00:06:00.752 }
00:06:00.752 00:59:16 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=()
00:06:00.752 00:59:16 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods")
00:06:00.752 00:59:16 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version")
00:06:00.752 00:59:16 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort))
00:06:00.752 00:59:16 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods
00:06:00.752 00:59:16 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:00.752 00:59:16 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:06:00.752 00:59:16 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]'
00:06:00.752 00:59:16 app_cmdline -- app/cmdline.sh@26 -- # sort
00:06:00.752 00:59:16 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:00.752 00:59:16 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 ))
00:06:00.752 00:59:16 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]]
00:06:00.752 00:59:16 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:06:00.752 00:59:16 app_cmdline -- common/autotest_common.sh@648 -- # local es=0
00:06:00.752 00:59:16 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:06:00.752 00:59:16 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:06:00.752 00:59:16 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:06:00.752 00:59:16 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:06:00.752 00:59:16 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:06:00.752 00:59:16 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:06:00.752 00:59:16 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:06:00.752 00:59:16 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:06:00.752 00:59:16 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:06:00.752 00:59:16 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:06:01.011 request:
00:06:01.011 {
00:06:01.011   "method": "env_dpdk_get_mem_stats",
00:06:01.011   "req_id": 1
00:06:01.011 }
00:06:01.011 Got JSON-RPC error response
00:06:01.011 response:
00:06:01.011 {
00:06:01.011   "code": -32601,
00:06:01.011   "message": "Method not found"
00:06:01.011 }
00:06:01.011 00:59:16 app_cmdline -- common/autotest_common.sh@651 -- # es=1
00:06:01.011 00:59:16 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:06:01.011 00:59:16 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:06:01.011 00:59:16 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:06:01.011 00:59:16 app_cmdline -- app/cmdline.sh@1 -- # killprocess 4041675
00:06:01.011 00:59:16 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 4041675 ']'
00:06:01.011 00:59:16 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 4041675
00:06:01.011 00:59:16 app_cmdline -- common/autotest_common.sh@953 -- # uname
00:06:01.011 00:59:16 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:06:01.011 00:59:16 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4041675
00:06:01.011 00:59:16 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:06:01.011 00:59:16 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:06:01.011 00:59:16 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4041675'
00:06:01.011 killing process with pid 4041675
00:06:01.011 00:59:16 app_cmdline -- common/autotest_common.sh@967 -- # kill 4041675
00:06:01.011 00:59:16 app_cmdline -- common/autotest_common.sh@972 -- # wait 4041675
00:06:01.577
00:06:01.577 real    0m1.443s
00:06:01.577 user    0m1.766s
00:06:01.577 sys     0m0.439s
00:06:01.577 00:59:17 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:01.577 00:59:17 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:01.577 ************************************ 00:06:01.577 END TEST app_cmdline 00:06:01.577 ************************************ 00:06:01.577 00:59:17 -- common/autotest_common.sh@1142 -- # return 0 00:06:01.577 00:59:17 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:01.577 00:59:17 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:01.577 00:59:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.577 00:59:17 -- common/autotest_common.sh@10 -- # set +x 00:06:01.577 ************************************ 00:06:01.577 START TEST version 00:06:01.577 ************************************ 00:06:01.577 00:59:17 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:01.577 * Looking for test storage... 00:06:01.577 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:01.577 00:59:17 version -- app/version.sh@17 -- # get_header_version major 00:06:01.577 00:59:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:01.577 00:59:17 version -- app/version.sh@14 -- # cut -f2 00:06:01.577 00:59:17 version -- app/version.sh@14 -- # tr -d '"' 00:06:01.577 00:59:17 version -- app/version.sh@17 -- # major=24 00:06:01.577 00:59:17 version -- app/version.sh@18 -- # get_header_version minor 00:06:01.577 00:59:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:01.577 00:59:17 version -- app/version.sh@14 -- # cut -f2 00:06:01.577 00:59:17 version -- app/version.sh@14 -- # tr -d '"' 00:06:01.577 00:59:17 version -- app/version.sh@18 -- # minor=9 00:06:01.577 00:59:17 version -- app/version.sh@19 -- # get_header_version patch 00:06:01.577 00:59:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:01.577 00:59:17 version -- app/version.sh@14 -- # cut -f2 00:06:01.577 00:59:17 version -- app/version.sh@14 -- # tr -d '"' 00:06:01.577 00:59:17 version -- app/version.sh@19 -- # patch=0 00:06:01.577 00:59:17 version -- app/version.sh@20 -- # get_header_version suffix 00:06:01.577 00:59:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:01.577 00:59:17 version -- app/version.sh@14 -- # cut -f2 00:06:01.577 00:59:17 version -- app/version.sh@14 -- # tr -d '"' 00:06:01.577 00:59:17 version -- app/version.sh@20 -- # suffix=-pre 00:06:01.577 00:59:17 version -- app/version.sh@22 -- # version=24.9 00:06:01.577 00:59:17 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:01.577 00:59:17 version -- app/version.sh@28 -- # version=24.9rc0 00:06:01.577 00:59:17 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:01.577 00:59:17 version -- app/version.sh@30 -- # python3 -c 'import spdk; 
print(spdk.__version__)' 00:06:01.577 00:59:17 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:01.577 00:59:17 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:01.577 00:06:01.577 real 0m0.110s 00:06:01.577 user 0m0.055s 00:06:01.577 sys 0m0.078s 00:06:01.577 00:59:17 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.577 00:59:17 version -- common/autotest_common.sh@10 -- # set +x 00:06:01.577 ************************************ 00:06:01.577 END TEST version 00:06:01.577 ************************************ 00:06:01.577 00:59:17 -- common/autotest_common.sh@1142 -- # return 0 00:06:01.577 00:59:17 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:01.577 00:59:17 -- spdk/autotest.sh@198 -- # uname -s 00:06:01.577 00:59:17 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:01.577 00:59:17 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:01.577 00:59:17 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:01.577 00:59:17 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:06:01.577 00:59:17 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:01.577 00:59:17 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:01.577 00:59:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:01.577 00:59:17 -- common/autotest_common.sh@10 -- # set +x 00:06:01.836 00:59:17 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:01.836 00:59:17 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:01.836 00:59:17 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:06:01.836 00:59:17 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:06:01.836 00:59:17 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:06:01.836 00:59:17 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:06:01.836 00:59:17 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:01.836 00:59:17 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:01.836 00:59:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.836 00:59:17 -- common/autotest_common.sh@10 -- # set +x 00:06:01.836 ************************************ 00:06:01.836 START TEST nvmf_tcp 00:06:01.836 ************************************ 00:06:01.836 00:59:17 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:01.836 * Looking for test storage... 00:06:01.836 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:01.836 00:59:17 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:01.836 00:59:17 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:01.836 00:59:17 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:01.836 00:59:17 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:06:01.836 00:59:17 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:01.836 00:59:17 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:01.836 00:59:17 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:01.836 00:59:17 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:01.836 00:59:17 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:01.836 00:59:17 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:01.836 00:59:17 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:01.836 00:59:17 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:01.836 00:59:17 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:01.836 00:59:17 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:01.836 00:59:17 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:01.836 00:59:17 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:01.836 00:59:17 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:01.836 00:59:17 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:01.836 00:59:17 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:01.836 00:59:17 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:01.836 00:59:17 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:01.836 00:59:17 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:01.836 00:59:17 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:01.836 00:59:17 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:01.836 00:59:17 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.836 00:59:17 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.836 00:59:17 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.836 00:59:17 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:06:01.836 00:59:17 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.836 00:59:17 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:06:01.836 00:59:17 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:01.836 00:59:17 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:01.836 00:59:17 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:01.836 00:59:17 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:01.837 00:59:17 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:01.837 00:59:17 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:01.837 00:59:17 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:01.837 00:59:17 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:01.837 00:59:17 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:01.837 00:59:17 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:01.837 00:59:17 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:01.837 00:59:17 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:01.837 00:59:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:01.837 00:59:17 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:01.837 00:59:17 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:01.837 00:59:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:01.837 00:59:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.837 00:59:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:01.837 ************************************ 00:06:01.837 START TEST nvmf_example 00:06:01.837 ************************************ 00:06:01.837 00:59:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:01.837 * Looking for test storage... 
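Before the example test body runs, the nvmf/common.sh prologue sourced above pins the well-known test ports and derives the initiator's host NQN with nvme-cli. A condensed sketch of that prologue; the literal values are copied from this run's trace, and the parameter expansion is one illustrative way to obtain the same NVME_HOSTID, not necessarily the script's own:

NVMF_PORT=4420
NVMF_SECOND_PORT=4421
NVMF_THIRD_PORT=4422
NVMF_IP_PREFIX=192.168.100
NVME_HOSTNQN=$(nvme gen-hostnqn)       # uuid-based NQN, e.g. nqn.2014-08.org.nvmexpress:uuid:...
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}    # keep only the trailing UUID
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")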
00:06:01.837 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:01.837 00:59:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:01.837 00:59:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:06:01.837 00:59:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:01.837 00:59:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:01.837 00:59:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:01.837 00:59:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:01.837 00:59:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:01.837 00:59:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:01.837 00:59:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:01.837 00:59:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:01.837 00:59:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:01.837 00:59:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:01.837 00:59:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:01.837 00:59:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:01.837 00:59:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:01.837 00:59:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:01.837 00:59:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:01.837 00:59:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:01.837 00:59:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:01.837 00:59:17 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:01.837 00:59:17 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:01.837 00:59:17 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:01.837 00:59:17 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.837 00:59:17 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.837 00:59:17 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.837 00:59:17 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:06:01.837 00:59:17 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.837 00:59:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:06:01.837 00:59:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:01.837 00:59:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:01.837 00:59:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:01.837 00:59:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:01.837 00:59:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:01.837 00:59:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:01.837 00:59:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:01.837 00:59:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:01.837 00:59:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:01.837 00:59:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:01.837 00:59:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:01.837 00:59:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:01.837 00:59:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:01.837 00:59:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:01.837 00:59:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:01.837 00:59:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:01.837 00:59:17 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:06:01.837 00:59:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:01.837 00:59:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:01.837 00:59:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:01.837 00:59:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:01.837 00:59:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:01.837 00:59:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:01.837 00:59:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:01.837 00:59:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:01.837 00:59:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:01.837 00:59:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:01.837 00:59:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:01.837 00:59:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:01.837 00:59:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:06:01.837 00:59:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:06:04.395 Found 0000:09:00.0 (0x8086 - 0x159b) 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:06:04.395 Found 0000:09:00.1 (0x8086 - 0x159b) 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:06:04.395 Found net devices under 
0000:09:00.0: cvl_0_0 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:06:04.395 Found net devices under 0000:09:00.1: cvl_0_1 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:04.395 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:04.396 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:04.396 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:04.396 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:04.396 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:04.396 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:04.396 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:04.396 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:04.396 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:04.396 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:04.396 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:04.396 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:04.396 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:04.396 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:04.396 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:06:04.396 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:04.396 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:04.396 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:06:04.396 00:06:04.396 --- 10.0.0.2 ping statistics --- 00:06:04.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:04.396 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:06:04.396 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:04.396 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:04.396 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.060 ms 00:06:04.396 00:06:04.396 --- 10.0.0.1 ping statistics --- 00:06:04.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:04.396 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:06:04.396 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:04.396 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:06:04.396 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:04.396 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:04.396 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:04.396 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:04.396 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:04.396 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:04.396 00:59:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:04.396 00:59:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:04.396 00:59:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:04.396 00:59:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:04.396 00:59:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:04.396 00:59:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:04.396 00:59:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:04.396 00:59:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=4043690 00:06:04.396 00:59:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:04.396 00:59:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:04.396 00:59:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 4043690 00:06:04.396 00:59:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 4043690 ']' 00:06:04.396 00:59:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.396 00:59:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:04.396 00:59:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
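The nvmf_tcp_init sequence traced above is the crux of single-host NVMe/TCP testing: the first port (cvl_0_0) moves into a private network namespace to act as the target, the second (cvl_0_1) stays in the root namespace as the initiator, and a ping in each direction proves the 10.0.0.0/24 link before any NVMe traffic flows. The same wiring, condensed; interface and namespace names match this run and will differ on other rigs:

# a separate namespace keeps initiator and target on distinct network
# stacks, so the TCP traffic really traverses the link between the two ports
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP
ping -c 1 10.0.0.2                                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator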
00:06:04.396 00:59:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:04.396 00:59:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:04.396 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.974 00:59:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:04.974 00:59:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:06:04.974 00:59:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:04.974 00:59:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:04.974 00:59:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:04.974 00:59:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:04.974 00:59:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.974 00:59:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:04.974 00:59:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.974 00:59:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:04.974 00:59:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.974 00:59:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:05.232 00:59:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.232 00:59:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:05.232 00:59:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:05.232 00:59:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.232 00:59:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:05.232 00:59:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.232 00:59:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:05.232 00:59:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:05.232 00:59:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.232 00:59:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:05.232 00:59:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.232 00:59:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:05.232 00:59:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.232 00:59:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:05.232 00:59:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.232 00:59:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:05.232 00:59:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:05.232 EAL: No free 2048 kB hugepages reported on node 1 
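For reference, the spdk_nvme_perf invocation just launched decodes as follows (flag meanings per the tool's documented options); the MiB/s column in the results below is simply IOPS x 4 KiB:

# -q 64      queue depth: 64 outstanding I/Os per queue pair
# -o 4096    I/O size in bytes (4 KiB)
# -w randrw  random mixed read/write workload
# -M 30      read share of the mix: 30% reads, 70% writes
# -t 10      run time in seconds
# -r ...     transport ID: NVMe/TCP, IPv4, target 10.0.0.2:4420, subsystem cnode1
spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'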
00:06:17.424 Initializing NVMe Controllers 00:06:17.424 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:17.424 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:17.424 Initialization complete. Launching workers. 00:06:17.424 ======================================================== 00:06:17.424 Latency(us) 00:06:17.424 Device Information : IOPS MiB/s Average min max 00:06:17.424 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15211.70 59.42 4207.16 746.58 15343.31 00:06:17.424 ======================================================== 00:06:17.425 Total : 15211.70 59.42 4207.16 746.58 15343.31 00:06:17.425 00:06:17.425 00:59:31 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:06:17.425 00:59:31 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:06:17.425 00:59:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:17.425 00:59:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:06:17.425 00:59:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:17.425 00:59:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:06:17.425 00:59:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:17.425 00:59:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:17.425 rmmod nvme_tcp 00:06:17.425 rmmod nvme_fabrics 00:06:17.425 rmmod nvme_keyring 00:06:17.425 00:59:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:17.425 00:59:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:06:17.425 00:59:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:06:17.425 00:59:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 4043690 ']' 00:06:17.425 00:59:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 4043690 00:06:17.425 00:59:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 4043690 ']' 00:06:17.425 00:59:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 4043690 00:06:17.425 00:59:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:06:17.425 00:59:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:17.425 00:59:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4043690 00:06:17.425 00:59:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:06:17.425 00:59:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:06:17.425 00:59:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4043690' 00:06:17.425 killing process with pid 4043690 00:06:17.425 00:59:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 4043690 00:06:17.425 00:59:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 4043690 00:06:17.425 nvmf threads initialize successfully 00:06:17.425 bdev subsystem init successfully 00:06:17.425 created a nvmf target service 00:06:17.425 create targets's poll groups done 00:06:17.425 all subsystems of target started 00:06:17.425 nvmf target is running 00:06:17.425 all subsystems of target stopped 00:06:17.425 destroy targets's poll groups done 00:06:17.425 destroyed the nvmf target service 00:06:17.425 bdev subsystem finish successfully 00:06:17.425 nvmf threads destroy successfully 00:06:17.425 00:59:31 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:17.425 00:59:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:17.425 00:59:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:17.425 00:59:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:17.425 00:59:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:17.425 00:59:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:17.425 00:59:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:17.425 00:59:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:17.682 00:59:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:17.682 00:59:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:06:17.682 00:59:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:17.682 00:59:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:17.682 00:06:17.682 real 0m15.960s 00:06:17.682 user 0m45.356s 00:06:17.682 sys 0m3.244s 00:06:17.682 00:59:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.682 00:59:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:17.682 ************************************ 00:06:17.682 END TEST nvmf_example 00:06:17.682 ************************************ 00:06:17.943 00:59:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:17.943 00:59:33 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:17.943 00:59:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:17.943 00:59:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.943 00:59:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:17.943 ************************************ 00:06:17.943 START TEST nvmf_filesystem 00:06:17.943 ************************************ 00:06:17.943 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:17.943 * Looking for test storage... 
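Stepping back to the nvmf_example tear-down traced just before this filesystem test began: it unwinds the set-up in reverse order. Condensed below, with the same caveat that names are specific to this run; the body of _remove_spdk_ns is not expanded in the trace, so the netns deletion shown here is an assumption about what it does:

modprobe -v -r nvme-tcp             # rmmod output above shows nvme_fabrics/nvme_keyring going too
kill "$nvmfpid" && wait "$nvmfpid"  # stop the nvmf example target (pid 4043690 in this run)
ip netns delete cvl_0_0_ns_spdk     # assumed body of _remove_spdk_ns
ip -4 addr flush cvl_0_1            # clear the initiator-side address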
00:06:17.943 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:17.943 00:59:33 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:06:17.943 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:17.943 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:06:17.943 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:17.943 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:17.943 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:06:17.943 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:06:17.943 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:17.943 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:06:17.943 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:17.943 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:17.943 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:17.943 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:17.943 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:17.943 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:17.943 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:17.943 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:17.943 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:17.943 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:17.943 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:17.943 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:17.943 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:17.943 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:17.943 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:17.943 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:17.943 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:17.943 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:17.943 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:17.943 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:17.943 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:17.943 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:17.943 00:59:33 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:17.943 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:17.943 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:17.943 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:17.943 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:17.943 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:17.943 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:17.943 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:17.943 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:17.943 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:17.943 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:17.943 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:17.943 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:17.943 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:17.943 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:17.943 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:17.943 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:17.943 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:17.943 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:17.943 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:17.943 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:17.943 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:17.943 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:17.943 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:17.944 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:17.944 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:17.944 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:17.944 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:17.944 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:17.944 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:06:17.944 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:17.944 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:17.944 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:06:17.944 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:17.944 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:06:17.944 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:17.944 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:17.944 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:06:17.944 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:17.944 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:06:17.944 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:17.944 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:17.944 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:17.944 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:06:17.944 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:06:17.944 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:17.944 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:17.944 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:17.944 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:06:17.944 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:17.944 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:17.944 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:17.944 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:17.944 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:17.944 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:06:17.944 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:17.944 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:17.944 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:17.944 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:17.944 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:17.944 00:59:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:06:17.944 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:17.944 00:59:33 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:17.944 00:59:33 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:17.944 00:59:33 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:17.944 00:59:33 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:17.944 00:59:33 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:17.944 00:59:33 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:17.944 00:59:33 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:17.944 00:59:33 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:17.944 00:59:33 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:17.944 00:59:33 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:17.944 00:59:33 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:17.944 00:59:33 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:17.944 00:59:33 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:17.944 00:59:33 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:06:17.944 00:59:33 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:17.944 #define SPDK_CONFIG_H 00:06:17.944 #define SPDK_CONFIG_APPS 1 00:06:17.944 #define SPDK_CONFIG_ARCH native 00:06:17.944 #undef SPDK_CONFIG_ASAN 00:06:17.944 #undef SPDK_CONFIG_AVAHI 00:06:17.944 #undef SPDK_CONFIG_CET 00:06:17.944 #define SPDK_CONFIG_COVERAGE 1 00:06:17.944 #define SPDK_CONFIG_CROSS_PREFIX 00:06:17.944 #undef SPDK_CONFIG_CRYPTO 00:06:17.944 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:17.944 #undef SPDK_CONFIG_CUSTOMOCF 00:06:17.944 #undef SPDK_CONFIG_DAOS 00:06:17.944 #define SPDK_CONFIG_DAOS_DIR 00:06:17.944 #define SPDK_CONFIG_DEBUG 1 00:06:17.944 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:17.944 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:17.944 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:17.944 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:17.944 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:17.944 #undef SPDK_CONFIG_DPDK_UADK 00:06:17.944 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:17.944 #define SPDK_CONFIG_EXAMPLES 1 00:06:17.944 #undef SPDK_CONFIG_FC 00:06:17.944 #define SPDK_CONFIG_FC_PATH 00:06:17.944 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:17.944 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:17.944 #undef SPDK_CONFIG_FUSE 00:06:17.944 #undef SPDK_CONFIG_FUZZER 00:06:17.944 #define SPDK_CONFIG_FUZZER_LIB 00:06:17.944 #undef SPDK_CONFIG_GOLANG 00:06:17.944 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:17.944 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:17.944 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:17.944 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:06:17.944 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:17.944 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:17.944 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:17.944 #define SPDK_CONFIG_IDXD 1 00:06:17.944 #define SPDK_CONFIG_IDXD_KERNEL 1 00:06:17.944 #undef SPDK_CONFIG_IPSEC_MB 00:06:17.944 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:17.944 #define SPDK_CONFIG_ISAL 1 00:06:17.944 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:17.944 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:17.944 #define SPDK_CONFIG_LIBDIR 00:06:17.944 #undef SPDK_CONFIG_LTO 00:06:17.944 #define SPDK_CONFIG_MAX_LCORES 128 00:06:17.944 #define SPDK_CONFIG_NVME_CUSE 1 00:06:17.944 #undef SPDK_CONFIG_OCF 00:06:17.944 #define SPDK_CONFIG_OCF_PATH 00:06:17.944 #define 
SPDK_CONFIG_OPENSSL_PATH 00:06:17.944 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:17.944 #define SPDK_CONFIG_PGO_DIR 00:06:17.944 #undef SPDK_CONFIG_PGO_USE 00:06:17.944 #define SPDK_CONFIG_PREFIX /usr/local 00:06:17.944 #undef SPDK_CONFIG_RAID5F 00:06:17.944 #undef SPDK_CONFIG_RBD 00:06:17.944 #define SPDK_CONFIG_RDMA 1 00:06:17.944 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:17.944 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:17.944 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:17.944 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:17.944 #define SPDK_CONFIG_SHARED 1 00:06:17.944 #undef SPDK_CONFIG_SMA 00:06:17.944 #define SPDK_CONFIG_TESTS 1 00:06:17.944 #undef SPDK_CONFIG_TSAN 00:06:17.944 #define SPDK_CONFIG_UBLK 1 00:06:17.944 #define SPDK_CONFIG_UBSAN 1 00:06:17.944 #undef SPDK_CONFIG_UNIT_TESTS 00:06:17.944 #undef SPDK_CONFIG_URING 00:06:17.944 #define SPDK_CONFIG_URING_PATH 00:06:17.944 #undef SPDK_CONFIG_URING_ZNS 00:06:17.944 #undef SPDK_CONFIG_USDT 00:06:17.944 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:17.944 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:17.944 #define SPDK_CONFIG_VFIO_USER 1 00:06:17.944 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:17.944 #define SPDK_CONFIG_VHOST 1 00:06:17.944 #define SPDK_CONFIG_VIRTIO 1 00:06:17.944 #undef SPDK_CONFIG_VTUNE 00:06:17.944 #define SPDK_CONFIG_VTUNE_DIR 00:06:17.944 #define SPDK_CONFIG_WERROR 1 00:06:17.944 #define SPDK_CONFIG_WPDK_DIR 00:06:17.944 #undef SPDK_CONFIG_XNVME 00:06:17.944 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:17.944 00:59:33 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:17.944 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:17.944 00:59:33 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:17.944 00:59:33 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:17.944 00:59:33 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:17.944 00:59:33 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.944 00:59:33 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.944 00:59:33 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.944 00:59:33 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:17.944 00:59:33 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:06:17.945 00:59:33 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:06:17.945 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
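The trace at autotest_common.sh@193-240 above shows the harness assembling its sanitizer environment: a fresh LeakSanitizer suppression file carrying a single leak:libfuse3.so entry, plus the ASAN/UBSAN option strings and the default RPC socket path. A minimal standalone sketch of the same setup, with every path and value copied from the trace (only the consolidated form is an assumption; the harness builds the file via rm/cat/echo as logged):

  # Sketch only -- sanitizer environment as exported by autotest_common.sh above
  suppfile=/var/tmp/asan_suppression_file
  rm -rf "$suppfile"
  echo 'leak:libfuse3.so' > "$suppfile"
  export LSAN_OPTIONS=suppressions=$suppfile
  export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
  export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
  export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock

With halt_on_error=1 and exitcode=134 in UBSAN_OPTIONS, any undefined-behavior report aborts the target immediately, which is how a UBSAN hit surfaces as a hard test failure rather than a silent log line.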
00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j48 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 4045401 ]] 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 4045401 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.hb7A73 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.hb7A73/tests/target /tmp/spdk.hb7A73 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # 
avails["$mount"]=67108864 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=952066048 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4332363776 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:06:17.946 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=51321077760 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=61994725376 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=10673647616 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30992650240 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997360640 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12390182912 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12398948352 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=8765440 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30996852736 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997364736 00:06:17.947 00:59:33 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=512000 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6199468032 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6199472128 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:06:17.947 * Looking for test storage... 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=51321077760 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=12888240128 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:17.947 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:06:17.947 00:59:33 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
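At nvmf/common.sh@9-22 above, the harness fixes the fabric constants for the run: listener ports 4420/4421/4422, serial SPDKISFASTANDAWESOME, subsystem NQN nqn.2016-06.io.spdk:testnqn, and an initiator identity minted once via nvme gen-hostnqn, whose --hostnqn/--hostid pair is kept in the NVME_HOST array so every later connect presents the same host. A hedged sketch of the connect those variables ultimately feed (the 10.0.0.2:4420 target address is an assumption based on the topology this log configures further down; -t/-a/-s/-n and --hostnqn/--hostid are standard nvme-cli flags):

  # Sketch only -- initiator-side connect assembled from the variables above
  NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # bare UUID, matching the trace above
  nvme connect -t tcp -a 10.0.0.2 -s 4420 \
      -n nqn.2016-06.io.spdk:testnqn \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"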
00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.947 00:59:33 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.948 00:59:33 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.948 00:59:33 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:17.948 00:59:33 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.948 00:59:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:06:17.948 00:59:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:17.948 00:59:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:17.948 00:59:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:17.948 00:59:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:17.948 00:59:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:17.948 00:59:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:17.948 00:59:33 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:17.948 00:59:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:17.948 00:59:33 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:06:17.948 00:59:33 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:06:17.948 00:59:33 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:06:17.948 00:59:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:17.948 00:59:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:17.948 00:59:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:17.948 00:59:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:17.948 00:59:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:17.948 00:59:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:17.948 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:17.948 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:17.948 00:59:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:17.948 00:59:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:17.948 00:59:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:06:17.948 00:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:20.479 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:20.479 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:06:20.479 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:20.479 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:20.479 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:20.479 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:20.479 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:20.479 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:06:20.479 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:20.479 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:06:20.479 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:06:20.479 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:06:20.479 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:06:20.479 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:06:20.479 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:06:20.479 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:20.479 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:20.479 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:20.479 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:20.479 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:20.479 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:20.479 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:20.479 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:20.479 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:20.479 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:20.479 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:20.479 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:20.479 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:20.479 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:20.479 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:20.479 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:20.479 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:20.479 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:20.479 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:06:20.479 Found 0000:09:00.0 (0x8086 - 0x159b) 00:06:20.479 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:20.479 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:20.479 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:20.479 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:20.479 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:06:20.480 Found 0000:09:00.1 (0x8086 - 0x159b) 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:20.480 00:59:36 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:06:20.480 Found net devices under 0000:09:00.0: cvl_0_0 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:06:20.480 Found net devices under 0000:09:00.1: cvl_0_1 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:20.480 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:20.480 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:06:20.480 00:06:20.480 --- 10.0.0.2 ping statistics --- 00:06:20.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:20.480 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:20.480 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:20.480 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:06:20.480 00:06:20.480 --- 10.0.0.1 ping statistics --- 00:06:20.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:20.480 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:20.480 ************************************ 00:06:20.480 START TEST nvmf_filesystem_no_in_capsule 00:06:20.480 ************************************ 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=4047024 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 4047024 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 4047024 ']' 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:20.480 00:59:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:20.480 [2024-07-16 00:59:36.314329] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:06:20.480 [2024-07-16 00:59:36.314405] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:20.480 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.480 [2024-07-16 00:59:36.384426] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:20.739 [2024-07-16 00:59:36.498499] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:20.739 [2024-07-16 00:59:36.498554] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:20.739 [2024-07-16 00:59:36.498584] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:20.739 [2024-07-16 00:59:36.498595] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:20.739 [2024-07-16 00:59:36.498609] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
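At this point the harness has finished the NVMe/TCP plumbing: one E810 port (cvl_0_0) was moved into a fresh network namespace and given 10.0.0.2/24 to act as the target side, the peer port (cvl_0_1) stayed in the root namespace as the initiator at 10.0.0.1/24, TCP port 4420 was opened in iptables, connectivity was checked with a ping in each direction, and nvmf_tgt was launched inside the namespace. A minimal sketch of that sequence, condensed from the nvmf/common.sh trace above (the interface names and addresses are simply the ones this run picked; $SPDK_BIN is a stand-in for the jenkins workspace path shown in the trace, and only the comments are editorial):

  ip netns add cvl_0_0_ns_spdk                       # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move one port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
  ping -c 1 10.0.0.2                                 # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns
  ip netns exec cvl_0_0_ns_spdk "$SPDK_BIN"/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

Because nvmf/common.sh@270 prepends the namespace command to NVMF_APP, every subsequent target invocation runs under 'ip netns exec cvl_0_0_ns_spdk', which is why the reactors below come up inside the namespace.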
00:06:20.739 [2024-07-16 00:59:36.498659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.739 [2024-07-16 00:59:36.498720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:20.739 [2024-07-16 00:59:36.498785] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:20.739 [2024-07-16 00:59:36.498788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.739 00:59:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:20.739 00:59:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:06:20.739 00:59:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:20.739 00:59:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:20.739 00:59:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:20.739 00:59:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:20.739 00:59:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:20.739 00:59:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:20.739 00:59:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.740 00:59:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:20.740 [2024-07-16 00:59:36.654700] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:20.740 00:59:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.740 00:59:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:20.740 00:59:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.740 00:59:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:20.998 Malloc1 00:06:20.998 00:59:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.998 00:59:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:20.998 00:59:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.998 00:59:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:20.998 00:59:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.998 00:59:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:20.998 00:59:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.998 00:59:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:06:20.998 00:59:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.998 00:59:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:20.998 00:59:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.998 00:59:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:20.998 [2024-07-16 00:59:36.831281] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:20.998 00:59:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.998 00:59:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:20.998 00:59:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:06:20.998 00:59:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:06:20.998 00:59:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:06:20.998 00:59:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:06:20.998 00:59:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:20.998 00:59:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.998 00:59:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:20.998 00:59:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.998 00:59:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:06:20.998 { 00:06:20.998 "name": "Malloc1", 00:06:20.998 "aliases": [ 00:06:20.998 "760bbaed-40d7-4b41-bc6b-f3b2bdf20319" 00:06:20.998 ], 00:06:20.998 "product_name": "Malloc disk", 00:06:20.998 "block_size": 512, 00:06:20.998 "num_blocks": 1048576, 00:06:20.998 "uuid": "760bbaed-40d7-4b41-bc6b-f3b2bdf20319", 00:06:20.998 "assigned_rate_limits": { 00:06:20.998 "rw_ios_per_sec": 0, 00:06:20.998 "rw_mbytes_per_sec": 0, 00:06:20.998 "r_mbytes_per_sec": 0, 00:06:20.998 "w_mbytes_per_sec": 0 00:06:20.998 }, 00:06:20.998 "claimed": true, 00:06:20.998 "claim_type": "exclusive_write", 00:06:20.998 "zoned": false, 00:06:20.998 "supported_io_types": { 00:06:20.998 "read": true, 00:06:20.998 "write": true, 00:06:20.998 "unmap": true, 00:06:20.998 "flush": true, 00:06:20.998 "reset": true, 00:06:20.998 "nvme_admin": false, 00:06:20.998 "nvme_io": false, 00:06:20.998 "nvme_io_md": false, 00:06:20.998 "write_zeroes": true, 00:06:20.998 "zcopy": true, 00:06:20.998 "get_zone_info": false, 00:06:20.998 "zone_management": false, 00:06:20.998 "zone_append": false, 00:06:20.998 "compare": false, 00:06:20.998 "compare_and_write": false, 00:06:20.998 "abort": true, 00:06:20.998 "seek_hole": false, 00:06:20.998 "seek_data": false, 00:06:20.998 "copy": true, 00:06:20.998 "nvme_iov_md": false 00:06:20.998 }, 00:06:20.998 "memory_domains": [ 00:06:20.998 { 
00:06:20.998 "dma_device_id": "system", 00:06:20.998 "dma_device_type": 1 00:06:20.998 }, 00:06:20.998 { 00:06:20.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:20.998 "dma_device_type": 2 00:06:20.998 } 00:06:20.998 ], 00:06:20.998 "driver_specific": {} 00:06:20.998 } 00:06:20.998 ]' 00:06:20.998 00:59:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:06:20.999 00:59:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:06:20.999 00:59:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:06:20.999 00:59:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:06:20.999 00:59:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:06:20.999 00:59:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:06:20.999 00:59:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:20.999 00:59:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:21.929 00:59:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:21.929 00:59:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:06:21.929 00:59:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:06:21.929 00:59:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:06:21.929 00:59:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:06:23.823 00:59:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:06:23.824 00:59:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:06:23.824 00:59:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:06:23.824 00:59:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:06:23.824 00:59:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:06:23.824 00:59:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:06:23.824 00:59:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:23.824 00:59:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:23.824 00:59:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:23.824 00:59:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # 
sec_size_to_bytes nvme0n1 00:06:23.824 00:59:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:23.824 00:59:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:23.824 00:59:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:23.824 00:59:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:23.824 00:59:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:23.824 00:59:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:23.824 00:59:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:24.082 00:59:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:24.338 00:59:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:25.708 00:59:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:06:25.708 00:59:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:25.708 00:59:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:25.708 00:59:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.708 00:59:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:25.708 ************************************ 00:06:25.708 START TEST filesystem_ext4 00:06:25.708 ************************************ 00:06:25.708 00:59:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:25.708 00:59:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:25.708 00:59:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:25.708 00:59:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:25.708 00:59:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:06:25.708 00:59:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:25.708 00:59:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:06:25.708 00:59:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:06:25.708 00:59:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:06:25.708 00:59:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:06:25.708 00:59:41 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:25.708 mke2fs 1.46.5 (30-Dec-2021) 00:06:25.708 Discarding device blocks: 0/522240 done 00:06:25.708 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:25.708 Filesystem UUID: 9256d231-e347-4944-b661-986748d86229 00:06:25.708 Superblock backups stored on blocks: 00:06:25.708 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:25.708 00:06:25.708 Allocating group tables: 0/64 done 00:06:25.708 Writing inode tables: 0/64 done 00:06:26.635 Creating journal (8192 blocks): done 00:06:27.197 Writing superblocks and filesystem accounting information: 0/64 done 00:06:27.197 00:06:27.197 00:59:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:06:27.197 00:59:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:27.197 00:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:27.197 00:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:06:27.197 00:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:27.197 00:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:06:27.197 00:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:06:27.197 00:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:27.197 00:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 4047024 00:06:27.197 00:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:27.197 00:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:27.197 00:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:27.197 00:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:27.197 00:06:27.197 real 0m1.856s 00:06:27.197 user 0m0.020s 00:06:27.197 sys 0m0.053s 00:06:27.197 00:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.197 00:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:27.197 ************************************ 00:06:27.197 END TEST filesystem_ext4 00:06:27.197 ************************************ 00:06:27.197 00:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:27.197 00:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:27.197 00:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:27.197 00:59:43 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.197 00:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:27.457 ************************************ 00:06:27.457 START TEST filesystem_btrfs 00:06:27.457 ************************************ 00:06:27.457 00:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:27.457 00:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:27.457 00:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:27.457 00:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:27.457 00:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:06:27.457 00:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:27.457 00:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:06:27.457 00:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:06:27.457 00:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:06:27.457 00:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:06:27.457 00:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:27.457 btrfs-progs v6.6.2 00:06:27.457 See https://btrfs.readthedocs.io for more information. 00:06:27.457 00:06:27.457 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:27.457 NOTE: several default settings have changed in version 5.15, please make sure 00:06:27.457 this does not affect your deployments: 00:06:27.457 - DUP for metadata (-m dup) 00:06:27.457 - enabled no-holes (-O no-holes) 00:06:27.457 - enabled free-space-tree (-R free-space-tree) 00:06:27.457 00:06:27.458 Label: (null) 00:06:27.458 UUID: a10c1d38-4825-4b60-8e99-16f43eaf8470 00:06:27.458 Node size: 16384 00:06:27.458 Sector size: 4096 00:06:27.458 Filesystem size: 510.00MiB 00:06:27.458 Block group profiles: 00:06:27.458 Data: single 8.00MiB 00:06:27.458 Metadata: DUP 32.00MiB 00:06:27.458 System: DUP 8.00MiB 00:06:27.458 SSD detected: yes 00:06:27.458 Zoned device: no 00:06:27.458 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:27.458 Runtime features: free-space-tree 00:06:27.458 Checksum: crc32c 00:06:27.458 Number of devices: 1 00:06:27.458 Devices: 00:06:27.458 ID SIZE PATH 00:06:27.458 1 510.00MiB /dev/nvme0n1p1 00:06:27.458 00:06:27.458 00:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:06:27.458 00:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:28.021 00:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:28.021 00:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:06:28.021 00:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:28.021 00:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:06:28.021 00:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:28.021 00:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:28.021 00:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 4047024 00:06:28.021 00:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:28.021 00:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:28.021 00:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:28.021 00:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:28.021 00:06:28.021 real 0m0.644s 00:06:28.021 user 0m0.020s 00:06:28.021 sys 0m0.116s 00:06:28.021 00:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:28.021 00:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:28.021 ************************************ 00:06:28.021 END TEST filesystem_btrfs 00:06:28.021 ************************************ 00:06:28.021 00:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:28.021 00:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:06:28.021 00:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:28.021 00:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.021 00:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:28.021 ************************************ 00:06:28.021 START TEST filesystem_xfs 00:06:28.021 ************************************ 00:06:28.021 00:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:06:28.021 00:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:28.021 00:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:28.021 00:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:28.021 00:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:06:28.021 00:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:28.021 00:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:06:28.021 00:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:06:28.021 00:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:06:28.021 00:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:06:28.021 00:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:28.021 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:28.021 = sectsz=512 attr=2, projid32bit=1 00:06:28.021 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:28.021 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:28.021 data = bsize=4096 blocks=130560, imaxpct=25 00:06:28.021 = sunit=0 swidth=0 blks 00:06:28.021 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:28.021 log =internal log bsize=4096 blocks=16384, version=2 00:06:28.021 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:28.021 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:28.977 Discarding blocks...Done. 
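Each filesystem_* subtest (ext4 and btrfs above, xfs next) runs the same verification once mkfs succeeds: mount the partition, create and remove a file with a sync on either side, unmount, then confirm with kill -0 that the nvmf target survived the I/O and with lsblk that the device and its partition are still exposed. A minimal restatement of the steps traced as target/filesystem.sh@23-43, with $nvmfpid standing in for the pid captured earlier (4047024 in this run):

  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa
  sync
  rm /mnt/device/aaa
  sync
  umount /mnt/device
  kill -0 "$nvmfpid"                         # target process still alive?
  lsblk -l -o NAME | grep -q -w nvme0n1      # namespace still visible?
  lsblk -l -o NAME | grep -q -w nvme0n1p1    # partition still visible?

The real/user/sys figures printed at the end of each subtest are the shell's timing of the whole subtest, not a property of the filesystem under test.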
00:06:28.977 00:59:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:06:28.977 00:59:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:30.872 00:59:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:30.872 00:59:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:06:30.872 00:59:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:30.872 00:59:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:06:30.872 00:59:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:06:30.872 00:59:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:30.872 00:59:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 4047024 00:06:30.872 00:59:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:30.872 00:59:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:30.872 00:59:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:30.872 00:59:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:30.872 00:06:30.872 real 0m2.681s 00:06:30.872 user 0m0.016s 00:06:30.872 sys 0m0.056s 00:06:30.872 00:59:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.872 00:59:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:30.872 ************************************ 00:06:30.872 END TEST filesystem_xfs 00:06:30.872 ************************************ 00:06:30.872 00:59:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:30.872 00:59:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:30.872 00:59:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:30.872 00:59:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:31.130 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:31.130 00:59:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:31.130 00:59:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:06:31.130 00:59:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:06:31.130 00:59:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:31.130 00:59:46 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:06:31.130 00:59:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:31.130 00:59:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:06:31.130 00:59:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:31.130 00:59:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:31.130 00:59:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:31.130 00:59:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:31.130 00:59:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:31.130 00:59:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 4047024 00:06:31.130 00:59:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 4047024 ']' 00:06:31.130 00:59:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 4047024 00:06:31.130 00:59:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:06:31.130 00:59:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:31.130 00:59:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4047024 00:06:31.130 00:59:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:31.130 00:59:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:31.130 00:59:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4047024' 00:06:31.130 killing process with pid 4047024 00:06:31.130 00:59:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 4047024 00:06:31.130 00:59:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 4047024 00:06:31.697 00:59:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:31.697 00:06:31.697 real 0m11.245s 00:06:31.697 user 0m43.019s 00:06:31.697 sys 0m1.642s 00:06:31.697 00:59:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.697 00:59:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:31.697 ************************************ 00:06:31.697 END TEST nvmf_filesystem_no_in_capsule 00:06:31.697 ************************************ 00:06:31.697 00:59:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:06:31.697 00:59:47 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:06:31.697 00:59:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 
-le 1 ']' 00:06:31.697 00:59:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.697 00:59:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:31.697 ************************************ 00:06:31.697 START TEST nvmf_filesystem_in_capsule 00:06:31.697 ************************************ 00:06:31.697 00:59:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:06:31.697 00:59:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:06:31.697 00:59:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:31.697 00:59:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:31.697 00:59:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:31.697 00:59:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:31.697 00:59:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=4048582 00:06:31.697 00:59:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:31.697 00:59:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 4048582 00:06:31.697 00:59:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 4048582 ']' 00:06:31.697 00:59:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.697 00:59:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:31.697 00:59:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.697 00:59:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:31.697 00:59:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:31.697 [2024-07-16 00:59:47.603641] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:06:31.697 [2024-07-16 00:59:47.603718] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:31.697 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.697 [2024-07-16 00:59:47.667771] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:31.980 [2024-07-16 00:59:47.776258] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:31.980 [2024-07-16 00:59:47.776311] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:06:31.980 [2024-07-16 00:59:47.776340] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:31.980 [2024-07-16 00:59:47.776351] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:31.980 [2024-07-16 00:59:47.776361] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:31.980 [2024-07-16 00:59:47.776415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.980 [2024-07-16 00:59:47.776473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:31.980 [2024-07-16 00:59:47.776542] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:31.980 [2024-07-16 00:59:47.776545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.980 00:59:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:31.981 00:59:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:06:31.981 00:59:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:31.981 00:59:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:31.981 00:59:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:31.981 00:59:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:31.981 00:59:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:31.981 00:59:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:06:31.981 00:59:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:31.981 00:59:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:31.981 [2024-07-16 00:59:47.939879] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:31.981 00:59:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:31.981 00:59:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:31.981 00:59:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:31.981 00:59:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:32.239 Malloc1 00:06:32.239 00:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.239 00:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:32.239 00:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.239 00:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:32.239 00:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.239 00:59:48 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:32.239 00:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.239 00:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:32.239 00:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.239 00:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:32.239 00:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.239 00:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:32.239 [2024-07-16 00:59:48.126606] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:32.239 00:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.239 00:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:32.239 00:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:06:32.239 00:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:06:32.239 00:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:06:32.239 00:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:06:32.239 00:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:32.239 00:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.239 00:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:32.239 00:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.239 00:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:06:32.239 { 00:06:32.239 "name": "Malloc1", 00:06:32.239 "aliases": [ 00:06:32.239 "06103c71-3ba3-466c-a6f6-2c45d2992799" 00:06:32.239 ], 00:06:32.239 "product_name": "Malloc disk", 00:06:32.239 "block_size": 512, 00:06:32.239 "num_blocks": 1048576, 00:06:32.239 "uuid": "06103c71-3ba3-466c-a6f6-2c45d2992799", 00:06:32.239 "assigned_rate_limits": { 00:06:32.239 "rw_ios_per_sec": 0, 00:06:32.239 "rw_mbytes_per_sec": 0, 00:06:32.239 "r_mbytes_per_sec": 0, 00:06:32.239 "w_mbytes_per_sec": 0 00:06:32.239 }, 00:06:32.239 "claimed": true, 00:06:32.239 "claim_type": "exclusive_write", 00:06:32.239 "zoned": false, 00:06:32.239 "supported_io_types": { 00:06:32.239 "read": true, 00:06:32.239 "write": true, 00:06:32.239 "unmap": true, 00:06:32.239 "flush": true, 00:06:32.239 "reset": true, 00:06:32.240 "nvme_admin": false, 00:06:32.240 "nvme_io": false, 00:06:32.240 "nvme_io_md": false, 00:06:32.240 "write_zeroes": true, 00:06:32.240 "zcopy": true, 00:06:32.240 "get_zone_info": false, 00:06:32.240 "zone_management": false, 00:06:32.240 
"zone_append": false, 00:06:32.240 "compare": false, 00:06:32.240 "compare_and_write": false, 00:06:32.240 "abort": true, 00:06:32.240 "seek_hole": false, 00:06:32.240 "seek_data": false, 00:06:32.240 "copy": true, 00:06:32.240 "nvme_iov_md": false 00:06:32.240 }, 00:06:32.240 "memory_domains": [ 00:06:32.240 { 00:06:32.240 "dma_device_id": "system", 00:06:32.240 "dma_device_type": 1 00:06:32.240 }, 00:06:32.240 { 00:06:32.240 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:32.240 "dma_device_type": 2 00:06:32.240 } 00:06:32.240 ], 00:06:32.240 "driver_specific": {} 00:06:32.240 } 00:06:32.240 ]' 00:06:32.240 00:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:06:32.240 00:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:06:32.240 00:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:06:32.240 00:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:06:32.240 00:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:06:32.240 00:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:06:32.240 00:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:32.240 00:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:33.211 00:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:33.211 00:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:06:33.211 00:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:06:33.211 00:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:06:33.211 00:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:06:35.109 00:59:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:06:35.110 00:59:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:06:35.110 00:59:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:06:35.110 00:59:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:06:35.110 00:59:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:06:35.110 00:59:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:06:35.110 00:59:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:35.110 00:59:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:06:35.110 00:59:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:35.110 00:59:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:35.110 00:59:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:35.110 00:59:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:35.110 00:59:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:35.110 00:59:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:35.110 00:59:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:35.110 00:59:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:35.110 00:59:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:35.110 00:59:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:35.367 00:59:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:36.739 00:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:06:36.739 00:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:36.739 00:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:36.739 00:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.739 00:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:36.739 ************************************ 00:06:36.739 START TEST filesystem_in_capsule_ext4 00:06:36.739 ************************************ 00:06:36.739 00:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:36.739 00:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:36.739 00:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:36.739 00:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:36.739 00:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:06:36.739 00:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:36.739 00:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:06:36.739 00:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:06:36.739 00:59:52 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:06:36.739 00:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:06:36.739 00:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:36.739 mke2fs 1.46.5 (30-Dec-2021) 00:06:36.739 Discarding device blocks: 0/522240 done 00:06:36.739 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:36.739 Filesystem UUID: 5383ce14-bbf1-43e3-932c-9c85042f5fbd 00:06:36.739 Superblock backups stored on blocks: 00:06:36.739 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:36.739 00:06:36.739 Allocating group tables: 0/64 done 00:06:36.739 Writing inode tables: 0/64 done 00:06:36.739 Creating journal (8192 blocks): done 00:06:37.865 Writing superblocks and filesystem accounting information: 0/64 4/64 done 00:06:37.865 00:06:37.865 00:59:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:06:37.865 00:59:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:38.428 00:59:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:38.685 00:59:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:06:38.685 00:59:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:38.685 00:59:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:06:38.685 00:59:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:06:38.685 00:59:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:38.685 00:59:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 4048582 00:06:38.685 00:59:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:38.685 00:59:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:38.685 00:59:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:38.685 00:59:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:38.685 00:06:38.685 real 0m2.170s 00:06:38.685 user 0m0.022s 00:06:38.685 sys 0m0.054s 00:06:38.685 00:59:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.685 00:59:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:38.686 ************************************ 00:06:38.686 END TEST filesystem_in_capsule_ext4 00:06:38.686 ************************************ 
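The in-capsule suite repeats the same ext4/btrfs/xfs cycle; the only functional difference from nvmf_filesystem_no_in_capsule is the in-capsule data size handed to the transport at creation, visible in the two rpc_cmd traces above (target/filesystem.sh@52). Side by side, assuming rpc_cmd is the usual wrapper around SPDK's scripts/rpc.py:

  rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0      # first suite: no in-capsule data
  rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096   # this suite: 4 KiB in-capsule

With -c 4096, writes of up to 4 KiB can travel inside the NVMe/TCP command capsule instead of being solicited separately, so the same filesystem workload exercises a different data path in the target.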
00:06:38.686 00:59:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:38.686 00:59:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:38.686 00:59:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:38.686 00:59:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.686 00:59:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:38.686 ************************************ 00:06:38.686 START TEST filesystem_in_capsule_btrfs 00:06:38.686 ************************************ 00:06:38.686 00:59:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:38.686 00:59:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:38.686 00:59:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:38.686 00:59:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:38.686 00:59:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:06:38.686 00:59:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:38.686 00:59:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:06:38.686 00:59:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:06:38.686 00:59:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:06:38.686 00:59:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:06:38.686 00:59:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:38.942 btrfs-progs v6.6.2 00:06:38.942 See https://btrfs.readthedocs.io for more information. 00:06:38.942 00:06:38.942 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:38.942 NOTE: several default settings have changed in version 5.15, please make sure 00:06:38.942 this does not affect your deployments: 00:06:38.942 - DUP for metadata (-m dup) 00:06:38.942 - enabled no-holes (-O no-holes) 00:06:38.942 - enabled free-space-tree (-R free-space-tree) 00:06:38.942 00:06:38.942 Label: (null) 00:06:38.942 UUID: a88498ac-8418-488e-9e37-84b5bfccb67a 00:06:38.942 Node size: 16384 00:06:38.942 Sector size: 4096 00:06:38.942 Filesystem size: 510.00MiB 00:06:38.943 Block group profiles: 00:06:38.943 Data: single 8.00MiB 00:06:38.943 Metadata: DUP 32.00MiB 00:06:38.943 System: DUP 8.00MiB 00:06:38.943 SSD detected: yes 00:06:38.943 Zoned device: no 00:06:38.943 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:38.943 Runtime features: free-space-tree 00:06:38.943 Checksum: crc32c 00:06:38.943 Number of devices: 1 00:06:38.943 Devices: 00:06:38.943 ID SIZE PATH 00:06:38.943 1 510.00MiB /dev/nvme0n1p1 00:06:38.943 00:06:38.943 00:59:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:06:38.943 00:59:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:39.896 00:59:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:39.896 00:59:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:06:39.896 00:59:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:39.896 00:59:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:06:39.896 00:59:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:39.896 00:59:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:39.896 00:59:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 4048582 00:06:39.896 00:59:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:39.896 00:59:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:39.896 00:59:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:39.896 00:59:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:39.896 00:06:39.896 real 0m1.018s 00:06:39.896 user 0m0.023s 00:06:39.896 sys 0m0.109s 00:06:39.896 00:59:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:39.896 00:59:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:39.896 ************************************ 00:06:39.896 END TEST filesystem_in_capsule_btrfs 00:06:39.896 ************************************ 00:06:39.896 00:59:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:06:39.896 00:59:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:06:39.896 00:59:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:39.896 00:59:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.896 00:59:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:39.896 ************************************ 00:06:39.896 START TEST filesystem_in_capsule_xfs 00:06:39.896 ************************************ 00:06:39.896 00:59:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:06:39.896 00:59:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:39.896 00:59:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:39.896 00:59:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:39.896 00:59:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:06:39.896 00:59:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:39.896 00:59:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:06:39.896 00:59:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:06:39.896 00:59:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:06:39.896 00:59:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:06:39.896 00:59:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:39.896 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:39.896 = sectsz=512 attr=2, projid32bit=1 00:06:39.896 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:39.896 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:39.896 data = bsize=4096 blocks=130560, imaxpct=25 00:06:39.896 = sunit=0 swidth=0 blks 00:06:39.896 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:39.896 log =internal log bsize=4096 blocks=16384, version=2 00:06:39.896 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:39.896 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:40.824 Discarding blocks...Done. 
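All three in-capsule variants (ext4, btrfs, xfs) then run the same verification body from target/filesystem.sh; sketched below from the traced line numbers. Here nvme_name is nvme0n1 and nvmfpid is the nvmf target process (4048582 in this run).

    # Sketch of the nvmf_filesystem_create verification steps, per the traced
    # target/filesystem.sh lines 23-43: exercise the mounted filesystem, unmount,
    # then confirm the target process and its block devices survived.
    mount /dev/${nvme_name}p1 /mnt/device             # sh@23
    touch /mnt/device/aaa                             # sh@24
    sync                                              # sh@25
    rm /mnt/device/aaa                                # sh@26
    sync                                              # sh@27
    i=0                                               # sh@29
    umount /mnt/device                                # sh@30
    kill -0 $nvmfpid                                  # sh@37: target still alive
    lsblk -l -o NAME | grep -q -w $nvme_name          # sh@40: namespace still visible
    lsblk -l -o NAME | grep -q -w ${nvme_name}p1      # sh@43: partition still visible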
00:06:40.824 00:59:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:06:40.824 00:59:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:42.717 00:59:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:42.717 00:59:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:06:42.717 00:59:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:42.717 00:59:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:06:42.717 00:59:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:06:42.717 00:59:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:42.717 00:59:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 4048582 00:06:42.717 00:59:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:42.717 00:59:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:42.717 00:59:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:42.717 00:59:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:42.717 00:06:42.717 real 0m3.032s 00:06:42.717 user 0m0.014s 00:06:42.717 sys 0m0.061s 00:06:42.717 00:59:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:42.717 00:59:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:42.717 ************************************ 00:06:42.717 END TEST filesystem_in_capsule_xfs 00:06:42.717 ************************************ 00:06:42.717 00:59:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:42.717 00:59:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:42.973 00:59:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:42.973 00:59:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:43.229 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:43.229 00:59:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:43.229 00:59:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:06:43.229 00:59:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:06:43.229 00:59:59 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:43.229 00:59:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:06:43.229 00:59:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:43.229 00:59:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:06:43.229 00:59:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:43.229 00:59:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:43.229 00:59:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:43.229 00:59:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:43.229 00:59:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:43.229 00:59:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 4048582 00:06:43.229 00:59:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 4048582 ']' 00:06:43.229 00:59:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 4048582 00:06:43.229 00:59:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:06:43.229 00:59:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:43.229 00:59:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4048582 00:06:43.229 00:59:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:43.229 00:59:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:43.229 00:59:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4048582' 00:06:43.229 killing process with pid 4048582 00:06:43.229 00:59:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 4048582 00:06:43.229 00:59:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 4048582 00:06:43.794 00:59:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:43.794 00:06:43.794 real 0m12.025s 00:06:43.794 user 0m46.072s 00:06:43.794 sys 0m1.753s 00:06:43.794 00:59:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.794 00:59:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:43.794 ************************************ 00:06:43.794 END TEST nvmf_filesystem_in_capsule 00:06:43.794 ************************************ 00:06:43.794 00:59:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:06:43.794 00:59:59 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:06:43.794 00:59:59 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:06:43.794 00:59:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:06:43.794 00:59:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:43.794 00:59:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:06:43.794 00:59:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:43.794 00:59:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:43.794 rmmod nvme_tcp 00:06:43.794 rmmod nvme_fabrics 00:06:43.794 rmmod nvme_keyring 00:06:43.794 00:59:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:43.794 00:59:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:06:43.794 00:59:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:06:43.794 00:59:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:06:43.794 00:59:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:43.794 00:59:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:43.794 00:59:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:43.794 00:59:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:43.794 00:59:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:43.794 00:59:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:43.794 00:59:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:43.794 00:59:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:45.706 01:00:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:45.706 00:06:45.706 real 0m27.989s 00:06:45.706 user 1m30.042s 00:06:45.706 sys 0m5.166s 00:06:45.706 01:00:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.706 01:00:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:45.706 ************************************ 00:06:45.706 END TEST nvmf_filesystem 00:06:45.706 ************************************ 00:06:45.966 01:00:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:45.966 01:00:01 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:45.966 01:00:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:45.966 01:00:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.966 01:00:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:45.966 ************************************ 00:06:45.966 START TEST nvmf_target_discovery 00:06:45.966 ************************************ 00:06:45.966 01:00:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:45.966 * Looking for test storage... 
00:06:45.966 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:45.966 01:00:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:45.966 01:00:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:06:45.966 01:00:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:45.966 01:00:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:45.966 01:00:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:45.966 01:00:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:45.966 01:00:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:45.966 01:00:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:45.966 01:00:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:45.966 01:00:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:45.966 01:00:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:45.966 01:00:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:45.966 01:00:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:45.966 01:00:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:45.966 01:00:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:45.966 01:00:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:45.966 01:00:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:45.966 01:00:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:45.966 01:00:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:45.966 01:00:01 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:45.966 01:00:01 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:45.966 01:00:01 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:45.966 01:00:01 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.966 01:00:01 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.966 01:00:01 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.966 01:00:01 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:06:45.966 01:00:01 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.966 01:00:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:06:45.966 01:00:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:45.966 01:00:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:45.966 01:00:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:45.966 01:00:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:45.966 01:00:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:45.966 01:00:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:45.966 01:00:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:45.966 01:00:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:45.966 01:00:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:06:45.966 01:00:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:06:45.966 01:00:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:06:45.966 01:00:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:06:45.966 01:00:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:06:45.966 01:00:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:45.966 01:00:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:45.966 01:00:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:06:45.966 01:00:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:45.966 01:00:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:45.966 01:00:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:45.966 01:00:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:45.966 01:00:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:45.966 01:00:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:45.966 01:00:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:45.966 01:00:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:06:45.966 01:00:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:48.495 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:48.495 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:06:48.495 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:48.495 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:48.495 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:48.495 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:48.495 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:48.495 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:06:48.495 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:48.495 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:06:48.495 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:06:48.495 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:06:48.495 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:06:48.495 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:06:48.495 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:06:48.495 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:48.495 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:48.495 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:48.495 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:48.495 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:48.495 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:48.495 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:48.495 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:48.495 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:48.495 01:00:03 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:48.496 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:48.496 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:48.496 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:48.496 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:48.496 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:48.496 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:48.496 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:48.496 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:48.496 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:06:48.496 Found 0000:09:00.0 (0x8086 - 0x159b) 00:06:48.496 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:48.496 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:48.496 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:48.496 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:48.496 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:48.496 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:48.496 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:06:48.496 Found 0000:09:00.1 (0x8086 - 0x159b) 00:06:48.496 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:48.496 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:48.496 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:48.496 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:48.496 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:48.496 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:48.496 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:48.496 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:48.496 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:48.496 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:48.496 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:48.496 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:48.496 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:48.496 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:48.496 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:48.496 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:06:48.496 Found net devices under 0000:09:00.0: cvl_0_0 00:06:48.496 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:48.496 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:48.496 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:48.496 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:48.496 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:48.496 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:48.496 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:48.496 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:48.496 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:06:48.496 Found net devices under 0000:09:00.1: cvl_0_1 00:06:48.496 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:48.496 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:48.496 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:06:48.496 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:48.496 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:48.496 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:48.496 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:48.496 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:48.496 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:48.496 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:48.496 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:48.496 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:48.496 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:48.496 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:48.496 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:48.496 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:48.496 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:48.496 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:48.496 01:00:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:48.496 01:00:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:48.496 01:00:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:48.496 01:00:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:06:48.496 01:00:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:48.496 01:00:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:48.496 01:00:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:48.496 01:00:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:48.496 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:48.496 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:06:48.496 00:06:48.496 --- 10.0.0.2 ping statistics --- 00:06:48.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:48.496 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:06:48.496 01:00:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:48.496 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:48.496 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:06:48.496 00:06:48.496 --- 10.0.0.1 ping statistics --- 00:06:48.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:48.496 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:06:48.496 01:00:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:48.496 01:00:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:06:48.496 01:00:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:48.496 01:00:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:48.496 01:00:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:48.496 01:00:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:48.496 01:00:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:48.496 01:00:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:48.496 01:00:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:48.496 01:00:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:06:48.496 01:00:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:48.496 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:48.496 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:48.496 01:00:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=4052305 00:06:48.496 01:00:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:48.496 01:00:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 4052305 00:06:48.496 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 4052305 ']' 00:06:48.496 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.496 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:48.496 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:06:48.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.496 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:48.496 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:48.496 [2024-07-16 01:00:04.193589] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:06:48.496 [2024-07-16 01:00:04.193658] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:48.496 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.496 [2024-07-16 01:00:04.255446] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:48.496 [2024-07-16 01:00:04.366864] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:48.497 [2024-07-16 01:00:04.366914] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:48.497 [2024-07-16 01:00:04.366962] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:48.497 [2024-07-16 01:00:04.366983] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:48.497 [2024-07-16 01:00:04.367009] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:48.497 [2024-07-16 01:00:04.367114] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.497 [2024-07-16 01:00:04.367144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:48.497 [2024-07-16 01:00:04.367202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:48.497 [2024-07-16 01:00:04.367208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:48.755 [2024-07-16 01:00:04.524677] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 
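The rpc_cmd calls traced below repeat identically for i in 1..4; as a bash sketch of the target/discovery.sh setup (sh@26-35), with rpc_cmd assumed to wrap scripts/rpc.py against the target just started:

    # Sketch of the discovery.sh setup loop: one null bdev, one subsystem,
    # one namespace, and one TCP listener per i (sh@26-30 in the trace).
    for i in $(seq 1 4); do
        rpc_cmd bdev_null_create Null$i 102400 512
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
            -a -s SPDK0000000000000$i
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
            -t tcp -a 10.0.0.2 -s 4420
    done
    # Plus a discovery listener and a referral (sh@32, sh@35), which produce
    # the 6-record discovery log shown further down.
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430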
00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:48.755 Null1 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:48.755 [2024-07-16 01:00:04.564998] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:48.755 Null2 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:06:48.755 01:00:04 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:48.755 Null3 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:48.755 Null4 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.755 01:00:04 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.755 01:00:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 4420 00:06:49.013 00:06:49.013 Discovery Log Number of Records 6, Generation counter 6 00:06:49.013 =====Discovery Log Entry 0====== 00:06:49.013 trtype: tcp 00:06:49.013 adrfam: ipv4 00:06:49.013 subtype: current discovery subsystem 00:06:49.013 treq: not required 00:06:49.013 portid: 0 00:06:49.013 trsvcid: 4420 00:06:49.013 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:49.013 traddr: 10.0.0.2 00:06:49.013 eflags: explicit discovery connections, duplicate discovery information 00:06:49.013 sectype: none 00:06:49.013 =====Discovery Log Entry 1====== 00:06:49.013 trtype: tcp 00:06:49.013 adrfam: ipv4 00:06:49.013 subtype: nvme subsystem 00:06:49.013 treq: not required 00:06:49.013 portid: 0 00:06:49.014 trsvcid: 4420 00:06:49.014 subnqn: nqn.2016-06.io.spdk:cnode1 00:06:49.014 traddr: 10.0.0.2 00:06:49.014 eflags: none 00:06:49.014 sectype: none 00:06:49.014 =====Discovery Log Entry 2====== 00:06:49.014 trtype: tcp 00:06:49.014 adrfam: ipv4 00:06:49.014 subtype: nvme subsystem 00:06:49.014 treq: not required 00:06:49.014 portid: 0 00:06:49.014 trsvcid: 4420 00:06:49.014 subnqn: nqn.2016-06.io.spdk:cnode2 00:06:49.014 traddr: 10.0.0.2 00:06:49.014 eflags: none 00:06:49.014 sectype: none 00:06:49.014 =====Discovery Log Entry 3====== 00:06:49.014 trtype: tcp 00:06:49.014 adrfam: ipv4 00:06:49.014 subtype: nvme subsystem 00:06:49.014 treq: not required 00:06:49.014 portid: 0 00:06:49.014 trsvcid: 4420 00:06:49.014 subnqn: nqn.2016-06.io.spdk:cnode3 00:06:49.014 traddr: 10.0.0.2 00:06:49.014 eflags: none 00:06:49.014 sectype: none 00:06:49.014 =====Discovery Log Entry 4====== 00:06:49.014 trtype: tcp 00:06:49.014 adrfam: ipv4 00:06:49.014 subtype: nvme subsystem 00:06:49.014 treq: not required 
00:06:49.014 portid: 0 00:06:49.014 trsvcid: 4420 00:06:49.014 subnqn: nqn.2016-06.io.spdk:cnode4 00:06:49.014 traddr: 10.0.0.2 00:06:49.014 eflags: none 00:06:49.014 sectype: none 00:06:49.014 =====Discovery Log Entry 5====== 00:06:49.014 trtype: tcp 00:06:49.014 adrfam: ipv4 00:06:49.014 subtype: discovery subsystem referral 00:06:49.014 treq: not required 00:06:49.014 portid: 0 00:06:49.014 trsvcid: 4430 00:06:49.014 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:49.014 traddr: 10.0.0.2 00:06:49.014 eflags: none 00:06:49.014 sectype: none 00:06:49.014 01:00:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:06:49.014 Perform nvmf subsystem discovery via RPC 00:06:49.014 01:00:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:06:49.014 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.014 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:49.014 [ 00:06:49.014 { 00:06:49.014 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:06:49.014 "subtype": "Discovery", 00:06:49.014 "listen_addresses": [ 00:06:49.014 { 00:06:49.014 "trtype": "TCP", 00:06:49.014 "adrfam": "IPv4", 00:06:49.014 "traddr": "10.0.0.2", 00:06:49.014 "trsvcid": "4420" 00:06:49.014 } 00:06:49.014 ], 00:06:49.014 "allow_any_host": true, 00:06:49.014 "hosts": [] 00:06:49.014 }, 00:06:49.014 { 00:06:49.014 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:06:49.014 "subtype": "NVMe", 00:06:49.014 "listen_addresses": [ 00:06:49.014 { 00:06:49.014 "trtype": "TCP", 00:06:49.014 "adrfam": "IPv4", 00:06:49.014 "traddr": "10.0.0.2", 00:06:49.014 "trsvcid": "4420" 00:06:49.014 } 00:06:49.014 ], 00:06:49.014 "allow_any_host": true, 00:06:49.014 "hosts": [], 00:06:49.014 "serial_number": "SPDK00000000000001", 00:06:49.014 "model_number": "SPDK bdev Controller", 00:06:49.014 "max_namespaces": 32, 00:06:49.014 "min_cntlid": 1, 00:06:49.014 "max_cntlid": 65519, 00:06:49.014 "namespaces": [ 00:06:49.014 { 00:06:49.014 "nsid": 1, 00:06:49.014 "bdev_name": "Null1", 00:06:49.014 "name": "Null1", 00:06:49.014 "nguid": "54990E92E0B44DD7A7F3AF95FCE123D3", 00:06:49.014 "uuid": "54990e92-e0b4-4dd7-a7f3-af95fce123d3" 00:06:49.014 } 00:06:49.014 ] 00:06:49.014 }, 00:06:49.014 { 00:06:49.014 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:06:49.014 "subtype": "NVMe", 00:06:49.014 "listen_addresses": [ 00:06:49.014 { 00:06:49.014 "trtype": "TCP", 00:06:49.014 "adrfam": "IPv4", 00:06:49.014 "traddr": "10.0.0.2", 00:06:49.014 "trsvcid": "4420" 00:06:49.014 } 00:06:49.014 ], 00:06:49.014 "allow_any_host": true, 00:06:49.014 "hosts": [], 00:06:49.014 "serial_number": "SPDK00000000000002", 00:06:49.014 "model_number": "SPDK bdev Controller", 00:06:49.014 "max_namespaces": 32, 00:06:49.014 "min_cntlid": 1, 00:06:49.014 "max_cntlid": 65519, 00:06:49.014 "namespaces": [ 00:06:49.014 { 00:06:49.014 "nsid": 1, 00:06:49.014 "bdev_name": "Null2", 00:06:49.014 "name": "Null2", 00:06:49.014 "nguid": "C78B69DACEA546C48F8FC5025AD8C9B3", 00:06:49.014 "uuid": "c78b69da-cea5-46c4-8f8f-c5025ad8c9b3" 00:06:49.014 } 00:06:49.014 ] 00:06:49.014 }, 00:06:49.014 { 00:06:49.014 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:06:49.014 "subtype": "NVMe", 00:06:49.014 "listen_addresses": [ 00:06:49.014 { 00:06:49.014 "trtype": "TCP", 00:06:49.014 "adrfam": "IPv4", 00:06:49.014 "traddr": "10.0.0.2", 00:06:49.014 "trsvcid": "4420" 00:06:49.014 } 00:06:49.014 ], 00:06:49.014 "allow_any_host": true, 
00:06:49.014 "hosts": [], 00:06:49.014 "serial_number": "SPDK00000000000003", 00:06:49.014 "model_number": "SPDK bdev Controller", 00:06:49.014 "max_namespaces": 32, 00:06:49.014 "min_cntlid": 1, 00:06:49.014 "max_cntlid": 65519, 00:06:49.014 "namespaces": [ 00:06:49.014 { 00:06:49.014 "nsid": 1, 00:06:49.014 "bdev_name": "Null3", 00:06:49.014 "name": "Null3", 00:06:49.014 "nguid": "F20F0826F5184B1B9A6D65D06BB8C300", 00:06:49.014 "uuid": "f20f0826-f518-4b1b-9a6d-65d06bb8c300" 00:06:49.014 } 00:06:49.014 ] 00:06:49.014 }, 00:06:49.014 { 00:06:49.014 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:06:49.014 "subtype": "NVMe", 00:06:49.014 "listen_addresses": [ 00:06:49.014 { 00:06:49.014 "trtype": "TCP", 00:06:49.014 "adrfam": "IPv4", 00:06:49.014 "traddr": "10.0.0.2", 00:06:49.014 "trsvcid": "4420" 00:06:49.014 } 00:06:49.014 ], 00:06:49.014 "allow_any_host": true, 00:06:49.014 "hosts": [], 00:06:49.014 "serial_number": "SPDK00000000000004", 00:06:49.014 "model_number": "SPDK bdev Controller", 00:06:49.014 "max_namespaces": 32, 00:06:49.014 "min_cntlid": 1, 00:06:49.014 "max_cntlid": 65519, 00:06:49.014 "namespaces": [ 00:06:49.014 { 00:06:49.014 "nsid": 1, 00:06:49.014 "bdev_name": "Null4", 00:06:49.014 "name": "Null4", 00:06:49.014 "nguid": "71FC131BD519407D82145B62E3EE9B54", 00:06:49.014 "uuid": "71fc131b-d519-407d-8214-5b62e3ee9b54" 00:06:49.014 } 00:06:49.014 ] 00:06:49.014 } 00:06:49.014 ] 00:06:49.014 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.014 01:00:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:06:49.014 01:00:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:49.014 01:00:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:49.014 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.014 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:49.014 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.014 01:00:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:06:49.014 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.014 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:49.014 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.014 01:00:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:49.014 01:00:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:06:49.014 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.014 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:49.014 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.014 01:00:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:06:49.014 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.014 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:49.014 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:06:49.014 01:00:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:49.014 01:00:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:06:49.014 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.014 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:49.014 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.014 01:00:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:06:49.014 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.014 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:49.014 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.014 01:00:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:49.014 01:00:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:06:49.014 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.014 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:49.014 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.014 01:00:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:06:49.014 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.014 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:49.014 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.014 01:00:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:06:49.014 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.014 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:49.014 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.014 01:00:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:06:49.014 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.015 01:00:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:06:49.015 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:49.015 01:00:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.015 01:00:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:06:49.015 01:00:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:06:49.015 01:00:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:06:49.015 01:00:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:06:49.015 01:00:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:49.015 01:00:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:06:49.015 01:00:04 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:49.015 01:00:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:06:49.015 01:00:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:49.015 01:00:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:49.015 rmmod nvme_tcp 00:06:49.273 rmmod nvme_fabrics 00:06:49.273 rmmod nvme_keyring 00:06:49.273 01:00:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:49.273 01:00:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:06:49.273 01:00:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:06:49.273 01:00:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 4052305 ']' 00:06:49.273 01:00:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 4052305 00:06:49.273 01:00:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 4052305 ']' 00:06:49.273 01:00:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 4052305 00:06:49.273 01:00:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:06:49.273 01:00:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:49.273 01:00:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4052305 00:06:49.273 01:00:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:49.273 01:00:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:49.273 01:00:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4052305' 00:06:49.273 killing process with pid 4052305 00:06:49.273 01:00:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 4052305 00:06:49.273 01:00:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 4052305 00:06:49.530 01:00:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:49.530 01:00:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:49.530 01:00:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:49.530 01:00:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:49.530 01:00:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:49.530 01:00:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:49.530 01:00:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:49.530 01:00:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:51.433 01:00:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:51.433 00:06:51.433 real 0m5.632s 00:06:51.433 user 0m4.582s 00:06:51.433 sys 0m1.994s 00:06:51.433 01:00:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.433 01:00:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:51.433 ************************************ 00:06:51.433 END TEST nvmf_target_discovery 00:06:51.433 ************************************ 00:06:51.433 01:00:07 nvmf_tcp -- common/autotest_common.sh@1142 
-- # return 0 00:06:51.433 01:00:07 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:51.433 01:00:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:51.433 01:00:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.433 01:00:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:51.693 ************************************ 00:06:51.693 START TEST nvmf_referrals 00:06:51.693 ************************************ 00:06:51.693 01:00:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:51.693 * Looking for test storage... 00:06:51.693 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:51.693 01:00:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:51.693 01:00:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:06:51.693 01:00:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:51.693 01:00:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:51.693 01:00:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:51.693 01:00:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:51.693 01:00:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:51.693 01:00:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:51.693 01:00:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:51.693 01:00:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:51.693 01:00:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:51.693 01:00:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:51.693 01:00:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:51.693 01:00:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:51.693 01:00:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:51.693 01:00:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:51.693 01:00:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:51.693 01:00:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:51.693 01:00:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:51.693 01:00:07 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:51.693 01:00:07 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:51.693 01:00:07 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:51.693 01:00:07 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.693 01:00:07 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.693 01:00:07 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.693 01:00:07 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:06:51.693 01:00:07 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.693 01:00:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:06:51.693 01:00:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:51.693 01:00:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:51.693 01:00:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:51.693 01:00:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:51.693 01:00:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:51.693 01:00:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:51.693 01:00:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:51.693 01:00:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:51.693 01:00:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:06:51.693 01:00:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:06:51.693 01:00:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
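The three referral addresses defined above (127.0.0.2 through 127.0.0.4) are what the rest of referrals.sh exercises: each is registered against the discovery service on referral port 4430, then read back both over the RPC interface and from the discovery log page served on the 8009 listener. A hedged sketch of that add-and-verify cycle, under the same assumptions as before (running target, illustrative rpc.py path):

# Register three referrals, then confirm they appear in both views.
rpc=/path/to/spdk/scripts/rpc.py   # illustrative path

for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    "$rpc" nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
done

# RPC view: expect the three traddr values back.
"$rpc" nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

# Host view: everything in the discovery log except the current
# discovery subsystem itself is a referral.
nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json |
    jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' |
    sort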
00:06:51.693 01:00:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:06:51.693 01:00:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:06:51.693 01:00:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:06:51.693 01:00:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:06:51.693 01:00:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:51.693 01:00:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:51.693 01:00:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:51.693 01:00:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:51.693 01:00:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:51.693 01:00:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:51.693 01:00:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:51.693 01:00:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:51.693 01:00:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:51.693 01:00:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:51.693 01:00:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:06:51.693 01:00:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:53.636 01:00:09 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:06:53.636 Found 0000:09:00.0 (0x8086 - 0x159b) 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:06:53.636 Found 0000:09:00.1 (0x8086 - 0x159b) 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:53.636 01:00:09 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:06:53.636 Found net devices under 0000:09:00.0: cvl_0_0 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:06:53.636 Found net devices under 0000:09:00.1: cvl_0_1 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:53.636 01:00:09 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:53.636 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:53.636 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:06:53.636 00:06:53.636 --- 10.0.0.2 ping statistics --- 00:06:53.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:53.636 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:53.636 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:53.636 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:06:53.636 00:06:53.636 --- 10.0.0.1 ping statistics --- 00:06:53.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:53.636 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:53.636 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:53.897 01:00:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:06:53.897 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:53.898 01:00:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:53.898 01:00:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:53.898 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=4054792 00:06:53.898 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:53.898 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 4054792 00:06:53.898 01:00:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 4054792 ']' 00:06:53.898 01:00:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.898 01:00:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:53.898 01:00:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:53.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.898 01:00:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:53.898 01:00:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:53.898 [2024-07-16 01:00:09.693357] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:06:53.898 [2024-07-16 01:00:09.693438] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:53.898 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.898 [2024-07-16 01:00:09.758793] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:53.898 [2024-07-16 01:00:09.868194] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:53.898 [2024-07-16 01:00:09.868265] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:53.898 [2024-07-16 01:00:09.868285] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:53.898 [2024-07-16 01:00:09.868302] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:53.898 [2024-07-16 01:00:09.868316] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:53.898 [2024-07-16 01:00:09.868399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.898 [2024-07-16 01:00:09.868480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:53.898 [2024-07-16 01:00:09.868597] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:53.898 [2024-07-16 01:00:09.868605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.165 01:00:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:54.165 01:00:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:06:54.165 01:00:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:54.165 01:00:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:54.165 01:00:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:54.165 01:00:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:54.165 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:54.165 01:00:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:54.165 01:00:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:54.165 [2024-07-16 01:00:10.024909] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:54.165 01:00:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:54.165 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:06:54.165 01:00:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:54.165 01:00:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:54.165 [2024-07-16 01:00:10.037167] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 8009 *** 00:06:54.165 01:00:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:54.165 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:06:54.165 01:00:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:54.165 01:00:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:54.165 01:00:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:54.165 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:06:54.165 01:00:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:54.165 01:00:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:54.165 01:00:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:54.165 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:06:54.165 01:00:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:54.165 01:00:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:54.165 01:00:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:54.165 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:54.165 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:06:54.165 01:00:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:54.165 01:00:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:54.165 01:00:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:54.165 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:06:54.165 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:06:54.165 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:54.165 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:54.165 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:54.165 01:00:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:54.165 01:00:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:54.165 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:54.165 01:00:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:54.165 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:54.165 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:54.165 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:06:54.165 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:54.165 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:54.165 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 
--hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:54.165 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:54.165 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:54.430 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:54.430 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:54.430 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:06:54.430 01:00:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:54.430 01:00:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:54.430 01:00:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:54.430 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:06:54.430 01:00:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:54.430 01:00:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:54.430 01:00:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:54.430 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:06:54.430 01:00:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:54.430 01:00:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:54.430 01:00:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:54.430 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:54.430 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:06:54.430 01:00:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:54.430 01:00:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:54.430 01:00:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:54.430 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:06:54.430 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:06:54.430 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:54.430 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:54.430 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:54.695 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:54.696 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:54.696 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:06:54.696 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:06:54.696 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:06:54.696 01:00:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:54.696 01:00:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:54.696 01:00:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:54.696 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:54.696 01:00:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:54.696 01:00:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:54.696 01:00:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:54.696 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:06:54.696 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:54.696 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:54.696 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:54.696 01:00:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:54.696 01:00:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:54.696 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:54.696 01:00:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:54.696 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:06:54.696 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:54.696 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:06:54.696 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:54.696 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:54.696 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:54.696 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:54.696 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:54.696 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:06:54.696 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:54.696 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:06:54.696 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:06:54.696 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:54.696 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:54.966 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:54.966 01:00:10 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:06:54.966 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:06:54.966 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:06:54.966 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:54.966 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:54.966 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:54.966 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:54.966 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:54.966 01:00:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:54.966 01:00:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:54.966 01:00:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:54.966 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:06:54.966 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:54.966 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:54.966 01:00:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:54.966 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:54.966 01:00:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:54.966 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:54.966 01:00:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.233 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:06:55.233 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:55.233 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:06:55.233 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:55.233 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:55.233 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:55.233 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:55.233 01:00:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:55.233 01:00:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:06:55.233 01:00:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:55.233 01:00:11 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:06:55.233 01:00:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:06:55.233 01:00:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:55.233 01:00:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:55.233 01:00:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:55.496 01:00:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:06:55.496 01:00:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:06:55.496 01:00:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:06:55.496 01:00:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:55.496 01:00:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:55.496 01:00:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:55.496 01:00:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:55.496 01:00:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:06:55.496 01:00:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.496 01:00:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:55.496 01:00:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.496 01:00:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:55.496 01:00:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:06:55.496 01:00:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.496 01:00:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:55.496 01:00:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.496 01:00:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:06:55.496 01:00:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:06:55.496 01:00:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:55.496 01:00:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:55.496 01:00:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:55.496 01:00:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:55.496 01:00:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:55.763 
01:00:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:06:55.763 01:00:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:06:55.763 01:00:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:06:55.763 01:00:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:06:55.763 01:00:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:55.763 01:00:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:06:55.763 01:00:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:55.763 01:00:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:06:55.763 01:00:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:55.763 01:00:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:55.763 rmmod nvme_tcp 00:06:55.763 rmmod nvme_fabrics 00:06:55.763 rmmod nvme_keyring 00:06:55.763 01:00:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:55.763 01:00:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:06:55.763 01:00:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:06:55.763 01:00:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 4054792 ']' 00:06:55.763 01:00:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 4054792 00:06:55.763 01:00:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 4054792 ']' 00:06:55.763 01:00:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 4054792 00:06:55.763 01:00:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:06:55.763 01:00:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:55.763 01:00:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4054792 00:06:55.763 01:00:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:55.763 01:00:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:55.763 01:00:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4054792' 00:06:55.763 killing process with pid 4054792 00:06:55.763 01:00:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 4054792 00:06:55.763 01:00:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 4054792 00:06:56.028 01:00:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:56.028 01:00:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:56.028 01:00:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:56.028 01:00:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:56.028 01:00:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:56.028 01:00:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:56.028 01:00:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:56.028 01:00:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:58.587 01:00:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:58.587 00:06:58.587 real 0m6.551s 00:06:58.587 user 0m9.488s 00:06:58.587 sys 0m2.106s 00:06:58.587 01:00:13 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.587 01:00:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:58.587 ************************************ 00:06:58.587 END TEST nvmf_referrals 00:06:58.587 ************************************ 00:06:58.587 01:00:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:58.587 01:00:14 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:06:58.587 01:00:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:58.587 01:00:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.587 01:00:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:58.587 ************************************ 00:06:58.587 START TEST nvmf_connect_disconnect 00:06:58.587 ************************************ 00:06:58.587 01:00:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:06:58.587 * Looking for test storage... 00:06:58.587 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:58.587 01:00:14 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:58.587 01:00:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:06:58.587 01:00:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:58.587 01:00:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:58.587 01:00:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:58.587 01:00:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:58.587 01:00:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:58.587 01:00:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:58.587 01:00:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:58.587 01:00:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:58.587 01:00:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:58.587 01:00:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:58.587 01:00:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:58.587 01:00:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:58.587 01:00:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:58.587 01:00:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:58.587 01:00:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:58.587 01:00:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:58.587 01:00:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:58.587 01:00:14 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:58.587 01:00:14 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:58.587 01:00:14 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:58.587 01:00:14 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.587 01:00:14 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.587 01:00:14 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.587 01:00:14 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:06:58.587 01:00:14 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.587 01:00:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:06:58.587 01:00:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:58.587 01:00:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:58.587 01:00:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:58.587 01:00:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:58.587 01:00:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:58.587 01:00:14 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:58.587 01:00:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:58.587 01:00:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:58.587 01:00:14 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:58.587 01:00:14 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:58.587 01:00:14 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:06:58.587 01:00:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:58.587 01:00:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:58.587 01:00:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:58.587 01:00:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:58.587 01:00:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:58.587 01:00:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:58.587 01:00:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:58.587 01:00:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:58.587 01:00:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:58.587 01:00:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:58.587 01:00:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:06:58.587 01:00:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:00.490 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:00.490 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:00.490 01:00:16 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:00.490 Found net devices under 0000:09:00.0: cvl_0_0 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:00.490 Found net devices under 0000:09:00.1: cvl_0_1 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:00.490 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:00.491 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:00.491 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:00.491 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:00.491 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:07:00.491 00:07:00.491 --- 10.0.0.2 ping statistics --- 00:07:00.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:00.491 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:07:00.491 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:00.491 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:00.491 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:07:00.491 00:07:00.491 --- 10.0.0.1 ping statistics --- 00:07:00.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:00.491 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:07:00.491 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:00.491 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:07:00.491 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:00.491 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:00.491 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:00.491 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:00.491 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:00.491 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:00.491 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:00.491 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:07:00.491 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:00.491 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:00.491 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:00.491 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=4057090 00:07:00.491 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:00.491 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 4057090 00:07:00.491 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 4057090 ']' 00:07:00.491 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.491 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:00.491 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.491 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:00.491 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:00.748 [2024-07-16 01:00:16.500132] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 
00:07:00.748 [2024-07-16 01:00:16.500202] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:00.748 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.748 [2024-07-16 01:00:16.560478] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:00.748 [2024-07-16 01:00:16.670048] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:00.748 [2024-07-16 01:00:16.670096] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:00.748 [2024-07-16 01:00:16.670118] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:00.748 [2024-07-16 01:00:16.670138] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:00.748 [2024-07-16 01:00:16.670154] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:00.748 [2024-07-16 01:00:16.670220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.748 [2024-07-16 01:00:16.670283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:00.748 [2024-07-16 01:00:16.670355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:00.748 [2024-07-16 01:00:16.670361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.006 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:01.006 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:07:01.006 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:01.006 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:01.006 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:01.006 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:01.006 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:01.006 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.006 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:01.006 [2024-07-16 01:00:16.826819] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:01.006 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.006 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:07:01.006 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.006 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:01.006 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.006 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:07:01.006 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:01.006 01:00:16 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.006 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:01.006 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.006 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:01.006 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.006 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:01.006 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.006 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:01.006 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.006 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:01.006 [2024-07-16 01:00:16.887900] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:01.006 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.006 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:07:01.006 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:07:01.006 01:00:16 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:07:04.293 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:06.834 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:09.353 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:11.877 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:15.154 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:15.154 01:00:30 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:07:15.154 01:00:30 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:07:15.154 01:00:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:15.154 01:00:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:07:15.154 01:00:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:15.154 01:00:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:07:15.154 01:00:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:15.154 01:00:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:15.154 rmmod nvme_tcp 00:07:15.154 rmmod nvme_fabrics 00:07:15.154 rmmod nvme_keyring 00:07:15.154 01:00:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:15.154 01:00:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:07:15.154 01:00:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:07:15.154 01:00:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 4057090 ']' 00:07:15.154 01:00:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 4057090 00:07:15.154 01:00:30 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@948 -- # '[' -z 4057090 ']' 00:07:15.154 01:00:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 4057090 00:07:15.154 01:00:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:07:15.154 01:00:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:15.154 01:00:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4057090 00:07:15.154 01:00:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:15.154 01:00:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:15.154 01:00:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4057090' 00:07:15.154 killing process with pid 4057090 00:07:15.154 01:00:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 4057090 00:07:15.154 01:00:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 4057090 00:07:15.154 01:00:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:15.154 01:00:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:15.154 01:00:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:15.154 01:00:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:15.154 01:00:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:15.154 01:00:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:15.154 01:00:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:15.154 01:00:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:17.059 01:00:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:17.059 00:07:17.059 real 0m18.995s 00:07:17.059 user 0m56.731s 00:07:17.059 sys 0m3.370s 00:07:17.059 01:00:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.059 01:00:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:17.059 ************************************ 00:07:17.059 END TEST nvmf_connect_disconnect 00:07:17.059 ************************************ 00:07:17.059 01:00:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:17.059 01:00:33 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:07:17.059 01:00:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:17.350 01:00:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.350 01:00:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:17.350 ************************************ 00:07:17.350 START TEST nvmf_multitarget 00:07:17.350 ************************************ 00:07:17.350 01:00:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:07:17.350 * Looking for test storage... 
00:07:17.350 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:17.350 01:00:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:17.350 01:00:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:07:17.350 01:00:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:17.350 01:00:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:17.350 01:00:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:17.350 01:00:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:17.350 01:00:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:17.350 01:00:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:17.350 01:00:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:17.350 01:00:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:17.350 01:00:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:17.350 01:00:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:17.350 01:00:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:17.350 01:00:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:17.350 01:00:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:17.350 01:00:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:17.350 01:00:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:17.350 01:00:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:17.350 01:00:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:17.350 01:00:33 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:17.350 01:00:33 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:17.350 01:00:33 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:17.350 01:00:33 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.350 01:00:33 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.351 01:00:33 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.351 01:00:33 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:07:17.351 01:00:33 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.351 01:00:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:07:17.351 01:00:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:17.351 01:00:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:17.351 01:00:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:17.351 01:00:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:17.351 01:00:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:17.351 01:00:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:17.351 01:00:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:17.351 01:00:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:17.351 01:00:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:07:17.351 01:00:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:07:17.351 01:00:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:17.351 01:00:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:17.351 01:00:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:17.351 01:00:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:17.351 01:00:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:17.351 01:00:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:07:17.351 01:00:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:17.351 01:00:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:17.351 01:00:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:17.351 01:00:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:17.351 01:00:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:07:17.351 01:00:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:19.271 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:19.271 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:19.271 Found net devices under 0000:09:00.0: cvl_0_0 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:19.271 Found net devices under 0000:09:00.1: cvl_0_1 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:19.271 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:19.530 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:19.530 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:19.530 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:19.530 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:19.530 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:19.530 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:19.530 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:19.530 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:19.530 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:07:19.530 00:07:19.530 --- 10.0.0.2 ping statistics --- 00:07:19.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:19.530 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:07:19.530 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:19.530 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:19.530 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:07:19.530 00:07:19.530 --- 10.0.0.1 ping statistics --- 00:07:19.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:19.530 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:07:19.530 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:19.530 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:07:19.530 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:19.530 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:19.530 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:19.530 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:19.530 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:19.530 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:19.530 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:19.530 01:00:35 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:07:19.530 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:19.530 01:00:35 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:19.530 01:00:35 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:19.530 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=4060852 00:07:19.530 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:19.530 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 4060852 00:07:19.530 01:00:35 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 4060852 ']' 00:07:19.530 01:00:35 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.530 01:00:35 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:19.530 01:00:35 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.530 01:00:35 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:19.530 01:00:35 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:19.530 [2024-07-16 01:00:35.436127] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 
00:07:19.530 [2024-07-16 01:00:35.436214] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:19.530 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.530 [2024-07-16 01:00:35.501827] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:19.788 [2024-07-16 01:00:35.612290] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:19.788 [2024-07-16 01:00:35.612364] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:19.788 [2024-07-16 01:00:35.612385] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:19.788 [2024-07-16 01:00:35.612403] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:19.788 [2024-07-16 01:00:35.612416] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:19.788 [2024-07-16 01:00:35.612505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:19.788 [2024-07-16 01:00:35.612570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:19.788 [2024-07-16 01:00:35.612642] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.788 [2024-07-16 01:00:35.612637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:19.788 01:00:35 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:19.788 01:00:35 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:07:19.788 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:19.788 01:00:35 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:19.788 01:00:35 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:19.788 01:00:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:19.788 01:00:35 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:19.788 01:00:35 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:19.788 01:00:35 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:07:20.046 01:00:35 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:07:20.046 01:00:35 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:07:20.046 "nvmf_tgt_1" 00:07:20.046 01:00:35 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:07:20.304 "nvmf_tgt_2" 00:07:20.304 01:00:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:20.304 01:00:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:07:20.304 01:00:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 
'!=' 3 ']' 00:07:20.304 01:00:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:07:20.561 true 00:07:20.561 01:00:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:07:20.561 true 00:07:20.561 01:00:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:20.561 01:00:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:07:20.819 01:00:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:07:20.819 01:00:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:20.819 01:00:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:07:20.819 01:00:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:20.819 01:00:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:07:20.819 01:00:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:20.820 01:00:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:07:20.820 01:00:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:20.820 01:00:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:20.820 rmmod nvme_tcp 00:07:20.820 rmmod nvme_fabrics 00:07:20.820 rmmod nvme_keyring 00:07:20.820 01:00:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:20.820 01:00:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:07:20.820 01:00:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:07:20.820 01:00:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 4060852 ']' 00:07:20.820 01:00:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 4060852 00:07:20.820 01:00:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 4060852 ']' 00:07:20.820 01:00:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 4060852 00:07:20.820 01:00:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:07:20.820 01:00:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:20.820 01:00:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4060852 00:07:20.820 01:00:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:20.820 01:00:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:20.820 01:00:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4060852' 00:07:20.820 killing process with pid 4060852 00:07:20.820 01:00:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 4060852 00:07:20.820 01:00:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 4060852 00:07:21.078 01:00:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:21.078 01:00:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:21.078 01:00:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:21.078 01:00:36 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:21.078 01:00:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:21.078 01:00:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:21.078 01:00:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:21.078 01:00:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:22.982 01:00:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:22.982 00:07:22.982 real 0m5.868s 00:07:22.982 user 0m6.514s 00:07:22.982 sys 0m2.031s 00:07:22.982 01:00:38 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.982 01:00:38 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:22.982 ************************************ 00:07:22.982 END TEST nvmf_multitarget 00:07:22.982 ************************************ 00:07:22.982 01:00:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:22.982 01:00:38 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:22.982 01:00:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:22.982 01:00:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.982 01:00:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:23.240 ************************************ 00:07:23.240 START TEST nvmf_rpc 00:07:23.240 ************************************ 00:07:23.240 01:00:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:23.240 * Looking for test storage... 
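For readers skimming the trace, the nvmf_multitarget run above boils down to a create/verify/delete cycle over the target RPC. A minimal sketch, assuming a freshly started nvmf_tgt (so the count starts at the single default target) and this workspace's script path:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
    $rpc nvmf_get_targets | jq length            # 1: only the default target
    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
    $rpc nvmf_get_targets | jq length            # 3: default plus the two named targets
    $rpc nvmf_delete_target -n nvmf_tgt_1
    $rpc nvmf_delete_target -n nvmf_tgt_2
    $rpc nvmf_get_targets | jq length            # back to 1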
00:07:23.240 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:23.240 01:00:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:23.240 01:00:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:07:23.240 01:00:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:23.240 01:00:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:23.240 01:00:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:23.240 01:00:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:23.240 01:00:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:23.240 01:00:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:23.240 01:00:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:23.240 01:00:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:23.240 01:00:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:23.240 01:00:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:23.240 01:00:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:23.240 01:00:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:23.240 01:00:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:23.240 01:00:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:23.240 01:00:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:23.240 01:00:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:23.240 01:00:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:23.240 01:00:39 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:23.240 01:00:39 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:23.240 01:00:39 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:23.240 01:00:39 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.240 01:00:39 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.240 01:00:39 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.240 01:00:39 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:07:23.240 01:00:39 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.240 01:00:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:07:23.240 01:00:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:23.240 01:00:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:23.240 01:00:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:23.240 01:00:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:23.240 01:00:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:23.240 01:00:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:23.240 01:00:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:23.240 01:00:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:23.240 01:00:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:07:23.240 01:00:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:07:23.240 01:00:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:23.240 01:00:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:23.240 01:00:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:23.240 01:00:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:23.240 01:00:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:23.241 01:00:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:23.241 01:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:23.241 01:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:23.241 01:00:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:23.241 01:00:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:23.241 01:00:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:07:23.241 01:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:25.766 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:25.766 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:07:25.766 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
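The common.sh setup traced here generates the host identity once and reuses it for every nvme connect later in the run. A sketch of the idea; the parameter expansion used to derive the host ID is an assumption (the trace only shows that NVME_HOSTID equals the bare UUID from the generated NQN):

    NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # assumed derivation: keep only <uuid>
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")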
00:07:25.766 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:25.766 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:25.766 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:25.766 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:25.766 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:07:25.766 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:25.766 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:07:25.766 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:07:25.766 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:07:25.766 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:07:25.766 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:07:25.766 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:07:25.766 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:25.766 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:25.766 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:25.766 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:25.766 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:25.766 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:25.766 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:25.766 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:25.766 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:25.766 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:25.766 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:25.766 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:25.766 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:25.766 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:25.766 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:25.766 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:25.766 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:25.766 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:25.766 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:25.766 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:25.766 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:25.766 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:25.766 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:25.766 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:25.766 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:25.766 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:25.766 01:00:41 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:25.766 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:25.766 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:25.766 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:25.766 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:25.766 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:25.766 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:25.766 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:25.766 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:25.766 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:25.766 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:25.766 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:25.766 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:25.766 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:25.766 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:25.766 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:25.766 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:25.766 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:25.766 Found net devices under 0000:09:00.0: cvl_0_0 00:07:25.766 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:25.766 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:25.766 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:25.766 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:25.766 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:25.766 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:25.766 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:25.766 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:25.766 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:25.766 Found net devices under 0000:09:00.1: cvl_0_1 00:07:25.767 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:25.767 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:25.767 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:07:25.767 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:25.767 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:25.767 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:25.767 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:25.767 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:25.767 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:25.767 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:25.767 01:00:41 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:25.767 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:25.767 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:25.767 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:25.767 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:25.767 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:25.767 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:25.767 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:25.767 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:25.767 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:25.767 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:25.767 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:25.767 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:25.767 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:25.767 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:25.767 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:25.767 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:25.767 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.156 ms 00:07:25.767 00:07:25.767 --- 10.0.0.2 ping statistics --- 00:07:25.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:25.767 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:07:25.767 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:25.767 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:25.767 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:07:25.767 00:07:25.767 --- 10.0.0.1 ping statistics --- 00:07:25.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:25.767 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:07:25.767 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:25.767 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:07:25.767 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:25.767 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:25.767 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:25.767 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:25.767 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:25.767 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:25.767 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:25.767 01:00:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:07:25.767 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:25.767 01:00:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:25.767 01:00:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:25.767 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=4062953 00:07:25.767 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:25.767 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 4062953 00:07:25.767 01:00:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 4062953 ']' 00:07:25.767 01:00:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.767 01:00:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:25.767 01:00:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.767 01:00:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:25.767 01:00:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:25.767 [2024-07-16 01:00:41.360563] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:07:25.767 [2024-07-16 01:00:41.360648] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:25.767 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.767 [2024-07-16 01:00:41.423004] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:25.767 [2024-07-16 01:00:41.525308] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:25.767 [2024-07-16 01:00:41.525362] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
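The nvmf_tcp_init sequence traced above turns the two E810 ports into a self-contained target/initiator pair: the target-side netdev is moved into its own network namespace, and the two sides ping each other before the NVMe/TCP listener on port 4420 is exercised. Condensed, and assuming cvl_0_0 and cvl_0_1 are the two physically looped ports from this host:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target side lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator IP stays on the host
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # host -> namespaced target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> host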
00:07:25.767 [2024-07-16 01:00:41.525375] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:25.767 [2024-07-16 01:00:41.525387] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:25.767 [2024-07-16 01:00:41.525397] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:25.767 [2024-07-16 01:00:41.525529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.767 [2024-07-16 01:00:41.525575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:25.767 [2024-07-16 01:00:41.525631] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:25.767 [2024-07-16 01:00:41.525635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.767 01:00:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:25.767 01:00:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:25.767 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:25.767 01:00:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:25.767 01:00:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:25.767 01:00:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:25.767 01:00:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:07:25.767 01:00:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.767 01:00:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:25.767 01:00:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.767 01:00:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:07:25.767 "tick_rate": 2700000000, 00:07:25.767 "poll_groups": [ 00:07:25.767 { 00:07:25.767 "name": "nvmf_tgt_poll_group_000", 00:07:25.767 "admin_qpairs": 0, 00:07:25.767 "io_qpairs": 0, 00:07:25.767 "current_admin_qpairs": 0, 00:07:25.767 "current_io_qpairs": 0, 00:07:25.767 "pending_bdev_io": 0, 00:07:25.767 "completed_nvme_io": 0, 00:07:25.767 "transports": [] 00:07:25.767 }, 00:07:25.767 { 00:07:25.767 "name": "nvmf_tgt_poll_group_001", 00:07:25.767 "admin_qpairs": 0, 00:07:25.767 "io_qpairs": 0, 00:07:25.767 "current_admin_qpairs": 0, 00:07:25.767 "current_io_qpairs": 0, 00:07:25.767 "pending_bdev_io": 0, 00:07:25.767 "completed_nvme_io": 0, 00:07:25.767 "transports": [] 00:07:25.767 }, 00:07:25.767 { 00:07:25.767 "name": "nvmf_tgt_poll_group_002", 00:07:25.767 "admin_qpairs": 0, 00:07:25.767 "io_qpairs": 0, 00:07:25.767 "current_admin_qpairs": 0, 00:07:25.767 "current_io_qpairs": 0, 00:07:25.767 "pending_bdev_io": 0, 00:07:25.767 "completed_nvme_io": 0, 00:07:25.767 "transports": [] 00:07:25.767 }, 00:07:25.767 { 00:07:25.767 "name": "nvmf_tgt_poll_group_003", 00:07:25.767 "admin_qpairs": 0, 00:07:25.767 "io_qpairs": 0, 00:07:25.767 "current_admin_qpairs": 0, 00:07:25.767 "current_io_qpairs": 0, 00:07:25.767 "pending_bdev_io": 0, 00:07:25.767 "completed_nvme_io": 0, 00:07:25.767 "transports": [] 00:07:25.767 } 00:07:25.767 ] 00:07:25.767 }' 00:07:25.767 01:00:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:07:25.767 01:00:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:07:25.767 01:00:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:07:25.767 01:00:41 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:07:25.767 01:00:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:07:25.767 01:00:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:07:26.025 01:00:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:07:26.025 01:00:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:26.025 01:00:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.025 01:00:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.025 [2024-07-16 01:00:41.782171] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:26.025 01:00:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.025 01:00:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:07:26.025 01:00:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.025 01:00:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.025 01:00:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.025 01:00:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:07:26.025 "tick_rate": 2700000000, 00:07:26.025 "poll_groups": [ 00:07:26.025 { 00:07:26.025 "name": "nvmf_tgt_poll_group_000", 00:07:26.025 "admin_qpairs": 0, 00:07:26.025 "io_qpairs": 0, 00:07:26.025 "current_admin_qpairs": 0, 00:07:26.025 "current_io_qpairs": 0, 00:07:26.025 "pending_bdev_io": 0, 00:07:26.025 "completed_nvme_io": 0, 00:07:26.025 "transports": [ 00:07:26.025 { 00:07:26.025 "trtype": "TCP" 00:07:26.025 } 00:07:26.025 ] 00:07:26.025 }, 00:07:26.025 { 00:07:26.025 "name": "nvmf_tgt_poll_group_001", 00:07:26.025 "admin_qpairs": 0, 00:07:26.025 "io_qpairs": 0, 00:07:26.025 "current_admin_qpairs": 0, 00:07:26.025 "current_io_qpairs": 0, 00:07:26.025 "pending_bdev_io": 0, 00:07:26.025 "completed_nvme_io": 0, 00:07:26.025 "transports": [ 00:07:26.025 { 00:07:26.025 "trtype": "TCP" 00:07:26.025 } 00:07:26.025 ] 00:07:26.025 }, 00:07:26.025 { 00:07:26.025 "name": "nvmf_tgt_poll_group_002", 00:07:26.025 "admin_qpairs": 0, 00:07:26.025 "io_qpairs": 0, 00:07:26.025 "current_admin_qpairs": 0, 00:07:26.025 "current_io_qpairs": 0, 00:07:26.025 "pending_bdev_io": 0, 00:07:26.025 "completed_nvme_io": 0, 00:07:26.025 "transports": [ 00:07:26.025 { 00:07:26.025 "trtype": "TCP" 00:07:26.025 } 00:07:26.025 ] 00:07:26.025 }, 00:07:26.025 { 00:07:26.025 "name": "nvmf_tgt_poll_group_003", 00:07:26.025 "admin_qpairs": 0, 00:07:26.025 "io_qpairs": 0, 00:07:26.025 "current_admin_qpairs": 0, 00:07:26.025 "current_io_qpairs": 0, 00:07:26.025 "pending_bdev_io": 0, 00:07:26.025 "completed_nvme_io": 0, 00:07:26.025 "transports": [ 00:07:26.025 { 00:07:26.025 "trtype": "TCP" 00:07:26.025 } 00:07:26.025 ] 00:07:26.025 } 00:07:26.025 ] 00:07:26.025 }' 00:07:26.025 01:00:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:07:26.025 01:00:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:26.025 01:00:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:26.025 01:00:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:26.025 01:00:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:07:26.025 01:00:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:07:26.025 01:00:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
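The jcount/jsum checks wrapped around the nvmf_get_stats output above are thin jq helpers. Reconstructed from the trace; piping the captured $stats JSON through a here-string is an assumption about rpc.sh's plumbing:

    jcount() { jq "$1" <<< "$stats" | wc -l; }                       # count filter matches
    jsum()   { jq "$1" <<< "$stats" | awk '{s+=$1} END {print s}'; } # sum numeric matches

    stats=$(rpc_cmd nvmf_get_stats)
    (( $(jcount '.poll_groups[].name') == 4 ))      # one poll group per core with -m 0xF
    (( $(jsum '.poll_groups[].io_qpairs') == 0 ))   # nothing connected yet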
00:07:26.025 01:00:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:26.025 01:00:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:26.025 01:00:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:07:26.025 01:00:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:07:26.025 01:00:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:07:26.025 01:00:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:07:26.025 01:00:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:07:26.025 01:00:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.025 01:00:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.025 Malloc1 00:07:26.025 01:00:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.025 01:00:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:26.025 01:00:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.025 01:00:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.025 01:00:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.025 01:00:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:26.025 01:00:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.025 01:00:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.025 01:00:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.025 01:00:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:07:26.025 01:00:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.025 01:00:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.025 01:00:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.025 01:00:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:26.025 01:00:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.025 01:00:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.025 [2024-07-16 01:00:41.929609] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:26.025 01:00:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.025 01:00:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:07:26.025 01:00:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:26.025 01:00:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:07:26.025 01:00:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:07:26.025 01:00:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:26.025 01:00:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:07:26.025 01:00:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:26.025 01:00:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:07:26.025 01:00:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:26.025 01:00:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:07:26.025 01:00:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:07:26.025 01:00:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:07:26.025 [2024-07-16 01:00:41.951967] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a' 00:07:26.025 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:26.025 could not add new controller: failed to write to nvme-fabrics device 00:07:26.025 01:00:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:26.025 01:00:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:26.025 01:00:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:26.025 01:00:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:26.025 01:00:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:26.025 01:00:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.025 01:00:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.025 01:00:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.025 01:00:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:26.587 01:00:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:07:26.587 01:00:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:26.587 01:00:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:26.587 01:00:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:26.587 01:00:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:29.110 01:00:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:29.110 01:00:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:29.110 01:00:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:29.110 01:00:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:29.110 01:00:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:29.110 01:00:44 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:29.110 01:00:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:29.110 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:29.110 01:00:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:29.110 01:00:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:29.110 01:00:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:29.110 01:00:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:29.110 01:00:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:29.110 01:00:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:29.110 01:00:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:29.110 01:00:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:29.110 01:00:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.110 01:00:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.110 01:00:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.110 01:00:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:29.110 01:00:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:29.110 01:00:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:29.110 01:00:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:07:29.110 01:00:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:29.110 01:00:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:07:29.110 01:00:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:29.110 01:00:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:07:29.110 01:00:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:29.110 01:00:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:07:29.110 01:00:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:07:29.110 01:00:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:29.110 [2024-07-16 01:00:44.716989] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a' 00:07:29.110 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:29.110 could not add new controller: failed to write to nvme-fabrics device 00:07:29.110 01:00:44 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:07:29.110 01:00:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:29.110 01:00:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:29.110 01:00:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:29.110 01:00:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:07:29.110 01:00:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.110 01:00:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.110 01:00:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.110 01:00:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:29.675 01:00:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:07:29.675 01:00:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:29.675 01:00:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:29.675 01:00:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:29.675 01:00:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:31.571 01:00:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:31.572 01:00:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:31.572 01:00:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:31.572 01:00:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:31.572 01:00:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:31.572 01:00:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:31.572 01:00:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:31.572 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:31.572 01:00:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:31.572 01:00:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:31.572 01:00:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:31.572 01:00:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:31.572 01:00:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:31.572 01:00:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:31.572 01:00:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:31.572 01:00:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:31.572 01:00:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.572 01:00:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.572 01:00:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.572 01:00:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:07:31.572 01:00:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:31.572 01:00:47 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:31.572 01:00:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.572 01:00:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.572 01:00:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.572 01:00:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:31.572 01:00:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.572 01:00:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.572 [2024-07-16 01:00:47.530738] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:31.572 01:00:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.572 01:00:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:31.572 01:00:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.572 01:00:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.572 01:00:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.572 01:00:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:31.572 01:00:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.572 01:00:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.572 01:00:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.572 01:00:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:32.505 01:00:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:32.505 01:00:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:32.505 01:00:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:32.505 01:00:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:32.505 01:00:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:34.403 01:00:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:34.404 01:00:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:34.404 01:00:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:34.404 01:00:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:34.404 01:00:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:34.404 01:00:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:34.404 01:00:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:34.404 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:34.404 01:00:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:34.404 01:00:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:34.404 01:00:50 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:34.404 01:00:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:34.404 01:00:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:34.404 01:00:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:34.404 01:00:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:34.404 01:00:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:34.404 01:00:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.404 01:00:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.404 01:00:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.404 01:00:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:34.404 01:00:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.404 01:00:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.404 01:00:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.404 01:00:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:34.404 01:00:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:34.404 01:00:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.404 01:00:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.404 01:00:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.404 01:00:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:34.404 01:00:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.404 01:00:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.404 [2024-07-16 01:00:50.341077] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:34.404 01:00:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.404 01:00:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:34.404 01:00:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.404 01:00:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.404 01:00:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.404 01:00:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:34.404 01:00:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.404 01:00:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.404 01:00:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.404 01:00:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:35.337 01:00:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:35.337 01:00:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:07:35.337 01:00:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:35.337 01:00:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:35.337 01:00:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:37.237 01:00:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:37.237 01:00:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:37.237 01:00:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:37.237 01:00:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:37.237 01:00:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:37.237 01:00:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:37.237 01:00:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:37.237 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:37.237 01:00:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:37.237 01:00:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:37.237 01:00:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:37.237 01:00:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:37.237 01:00:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:37.237 01:00:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:37.237 01:00:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:37.237 01:00:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:37.237 01:00:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.237 01:00:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:37.237 01:00:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.237 01:00:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:37.237 01:00:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.237 01:00:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:37.237 01:00:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.237 01:00:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:37.237 01:00:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:37.237 01:00:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.237 01:00:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:37.237 01:00:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.237 01:00:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:37.237 01:00:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.237 01:00:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:37.237 [2024-07-16 01:00:53.132588] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:07:37.237 01:00:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.237 01:00:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:37.237 01:00:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.237 01:00:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:37.237 01:00:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.237 01:00:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:37.237 01:00:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.237 01:00:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:37.237 01:00:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.237 01:00:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:37.801 01:00:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:37.801 01:00:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:37.801 01:00:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:37.801 01:00:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:37.801 01:00:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:40.403 01:00:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:40.403 01:00:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:40.403 01:00:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:40.403 01:00:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:40.403 01:00:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:40.403 01:00:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:40.403 01:00:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:40.403 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:40.403 01:00:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:40.403 01:00:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:40.403 01:00:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:40.403 01:00:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:40.403 01:00:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:40.403 01:00:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:40.403 01:00:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:40.403 01:00:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:40.403 01:00:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.403 01:00:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.403 01:00:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:07:40.403 01:00:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:40.403 01:00:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.403 01:00:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.403 01:00:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.403 01:00:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:40.403 01:00:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:40.403 01:00:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.403 01:00:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.403 01:00:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.403 01:00:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:40.403 01:00:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.403 01:00:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.403 [2024-07-16 01:00:55.928907] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:40.403 01:00:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.403 01:00:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:40.403 01:00:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.403 01:00:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.403 01:00:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.403 01:00:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:40.403 01:00:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.403 01:00:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.403 01:00:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.403 01:00:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:40.661 01:00:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:40.661 01:00:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:40.661 01:00:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:40.661 01:00:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:40.661 01:00:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:42.557 01:00:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:42.557 01:00:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:42.557 01:00:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:42.815 01:00:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:42.815 01:00:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:42.815 
01:00:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:42.815 01:00:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:42.815 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:42.815 01:00:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:42.815 01:00:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:42.815 01:00:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:42.815 01:00:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:42.815 01:00:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:42.815 01:00:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:42.815 01:00:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:42.815 01:00:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:42.815 01:00:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.815 01:00:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:42.815 01:00:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.815 01:00:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:42.815 01:00:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.815 01:00:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:42.815 01:00:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.815 01:00:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:42.815 01:00:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:42.815 01:00:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.815 01:00:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:42.815 01:00:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.815 01:00:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:42.815 01:00:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.815 01:00:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:42.815 [2024-07-16 01:00:58.700145] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:42.815 01:00:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.815 01:00:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:42.815 01:00:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.815 01:00:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:42.815 01:00:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.815 01:00:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:42.815 01:00:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.815 01:00:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:42.815 01:00:58 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.815 01:00:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:43.748 01:00:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:43.748 01:00:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:43.748 01:00:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:43.748 01:00:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:43.748 01:00:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:45.640 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:45.640 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:45.640 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:45.640 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:45.640 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:45.640 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:45.640 01:01:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:45.640 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:45.640 01:01:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:45.640 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:45.640 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:45.640 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:45.640 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:45.640 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:45.640 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:45.640 01:01:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:45.640 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.640 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.640 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.640 01:01:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:45.640 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.640 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.640 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.640 01:01:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:07:45.640 01:01:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:45.640 01:01:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:45.640 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.640 01:01:01 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:07:45.640 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.640 01:01:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:45.640 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.640 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.640 [2024-07-16 01:01:01.559045] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:45.640 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.640 01:01:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:45.640 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.640 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.640 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.641 01:01:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:45.641 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.641 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.641 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.641 01:01:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.641 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.641 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.641 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.641 01:01:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:45.641 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.641 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.641 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.641 01:01:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:45.641 01:01:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:45.641 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.641 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.641 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.641 01:01:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:45.641 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.641 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.641 [2024-07-16 01:01:01.607110] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:45.641 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.641 01:01:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:45.641 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:07:45.641 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.641 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.641 01:01:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:45.641 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.641 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.641 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.641 01:01:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.641 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.641 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.898 [2024-07-16 01:01:01.655293] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
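The iterations traced above all follow one pattern: create a subsystem with a known serial number, expose it on the TCP listener, attach a namespace, connect with the kernel initiator, wait for the serial to surface in lsblk, then disconnect and tear everything back down. A minimal bash reproduction of a single iteration is sketched below; the RPC names and flags match the trace, but the hostnqn/hostid flags are omitted and the Malloc1 bdev is presumed to exist already, so treat this as a sketch rather than the suite's exact code:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    # Build up the subsystem as rpc.sh lines 82-85 do.
    $rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_ns "$nqn" Malloc1 -n 5
    $rpc nvmf_subsystem_allow_any_host "$nqn"

    nvme connect -t tcp -n "$nqn" -a 10.0.0.2 -s 4420
    # waitforserial: poll lsblk until a device with the serial shows up
    # (mirrors the sleep-2-then-15-retries loop in autotest_common.sh).
    sleep 2
    for ((i = 0; i <= 15; i++)); do
        (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) >= 1 )) && break
        sleep 1
    done
    nvme disconnect -n "$nqn"

    # Tear down in the same order as rpc.sh lines 93-94.
    $rpc nvmf_subsystem_remove_ns "$nqn" 5
    $rpc nvmf_delete_subsystem "$nqn"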
00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.898 [2024-07-16 01:01:01.703430] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
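The loop that begins at target/rpc.sh line 99 repeats the same lifecycle without ever connecting a host, which stresses pure RPC state transitions: create, listen, add a namespace, allow any host, remove the namespace by ID, delete. Stripped of the xtrace plumbing, each pass reduces to six rpc.py calls (default RPC socket assumed; the value of $loops is not visible in this excerpt, so the count here is illustrative):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    for i in $(seq 1 5); do    # $loops iterations in the original suite
        $rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
        $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
        $rpc nvmf_subsystem_add_ns "$nqn" Malloc1
        $rpc nvmf_subsystem_allow_any_host "$nqn"
        $rpc nvmf_subsystem_remove_ns "$nqn" 1
        $rpc nvmf_delete_subsystem "$nqn"
    done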
00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:45.898 [2024-07-16 01:01:01.751597] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:45.898 01:01:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:45.899 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:45.899 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:45.899 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:45.899 01:01:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:07:45.899 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:45.899 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:45.899 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:45.899 01:01:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats
00:07:45.899 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:45.899 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:45.899 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:45.899 01:01:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{
00:07:45.899 "tick_rate": 2700000000,
00:07:45.899 "poll_groups": [
00:07:45.899 {
00:07:45.899 "name": "nvmf_tgt_poll_group_000",
00:07:45.899 "admin_qpairs": 2,
00:07:45.899 "io_qpairs": 84,
00:07:45.899 "current_admin_qpairs": 0,
00:07:45.899 "current_io_qpairs": 0,
00:07:45.899 "pending_bdev_io": 0,
00:07:45.899 "completed_nvme_io": 261,
00:07:45.899 "transports": [
00:07:45.899 {
00:07:45.899 "trtype": "TCP"
00:07:45.899 }
00:07:45.899 ]
00:07:45.899 },
00:07:45.899 {
00:07:45.899 "name": "nvmf_tgt_poll_group_001",
00:07:45.899 "admin_qpairs": 2,
00:07:45.899 "io_qpairs": 84,
00:07:45.899 "current_admin_qpairs": 0,
00:07:45.899 "current_io_qpairs": 0,
00:07:45.899 "pending_bdev_io": 0,
00:07:45.899 "completed_nvme_io": 156,
00:07:45.899 "transports": [
00:07:45.899 {
00:07:45.899 "trtype": "TCP"
00:07:45.899 }
00:07:45.899 ]
00:07:45.899 },
00:07:45.899 {
"name": "nvmf_tgt_poll_group_002",
00:07:45.899 "admin_qpairs": 1,
00:07:45.899 "io_qpairs": 84,
00:07:45.899 "current_admin_qpairs": 0,
00:07:45.899 "current_io_qpairs": 0,
00:07:45.899 "pending_bdev_io": 0,
00:07:45.899 "completed_nvme_io": 86,
00:07:45.899 "transports": [
00:07:45.899 {
00:07:45.899 "trtype": "TCP"
00:07:45.899 }
00:07:45.899 ]
00:07:45.899 },
00:07:45.899 {
00:07:45.899 "name": "nvmf_tgt_poll_group_003",
00:07:45.899 "admin_qpairs": 2,
00:07:45.899 "io_qpairs": 84,
00:07:45.899 "current_admin_qpairs": 0,
00:07:45.899 "current_io_qpairs": 0,
00:07:45.899 "pending_bdev_io": 0,
00:07:45.899 "completed_nvme_io": 183,
00:07:45.899 "transports": [
00:07:45.899 {
00:07:45.899 "trtype": "TCP"
00:07:45.899 }
00:07:45.899 ]
00:07:45.899 }
00:07:45.899 ]
00:07:45.899 }'
00:07:45.899 01:01:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs'
00:07:45.899 01:01:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs'
00:07:45.899 01:01:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs'
00:07:45.899 01:01:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:07:45.899 01:01:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 ))
00:07:45.899 01:01:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs'
00:07:45.899 01:01:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs'
00:07:45.899 01:01:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs'
00:07:45.899 01:01:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:07:45.899 01:01:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 ))
00:07:45.899 01:01:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']'
00:07:45.899 01:01:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT
00:07:45.899 01:01:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini
00:07:45.899 01:01:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup
00:07:45.899 01:01:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync
00:07:46.156 01:01:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:07:46.156 01:01:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e
00:07:46.156 01:01:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20}
00:07:46.156 01:01:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:07:46.156 01:01:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:07:46.156 01:01:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e
00:07:46.156 01:01:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0
00:07:46.156 01:01:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 4062953 ']'
00:07:46.156 01:01:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 4062953
00:07:46.156 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 4062953 ']'
00:07:46.156 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 4062953
00:07:46.156 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname
00:07:46.156 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:07:46.156 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4062953
00:07:46.156 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- #
process_name=reactor_0 00:07:46.156 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:46.157 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4062953' 00:07:46.157 killing process with pid 4062953 00:07:46.157 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 4062953 00:07:46.157 01:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 4062953 00:07:46.415 01:01:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:46.415 01:01:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:46.415 01:01:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:46.415 01:01:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:46.415 01:01:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:46.415 01:01:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:46.415 01:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:46.415 01:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:48.321 01:01:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:48.321 00:07:48.321 real 0m25.312s 00:07:48.321 user 1m22.162s 00:07:48.321 sys 0m4.090s 00:07:48.321 01:01:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:48.321 01:01:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.321 ************************************ 00:07:48.321 END TEST nvmf_rpc 00:07:48.321 ************************************ 00:07:48.579 01:01:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:48.579 01:01:04 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:07:48.579 01:01:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:48.579 01:01:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:48.579 01:01:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:48.579 ************************************ 00:07:48.579 START TEST nvmf_invalid 00:07:48.579 ************************************ 00:07:48.579 01:01:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:07:48.579 * Looking for test storage... 
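Before nvmf_rpc tears down, it pulls nvmf_get_stats and asserts that qpairs were actually exercised: in the dump above the four poll groups report 2+2+1+2 = 7 admin qpairs and 4 x 84 = 336 I/O qpairs. The jsum helper traced at target/rpc.sh lines 19-20 simply sums one numeric field across the poll groups with jq and awk. A standalone sketch follows; how the captured JSON is wired into jq is an assumption, since only the jq and awk stages are visible in the trace:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    stats=$($rpc nvmf_get_stats)

    jsum() {
        local filter=$1
        # emit one number per poll group, then sum the column
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }

    (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # 7 in this run
    (( $(jsum '.poll_groups[].io_qpairs') > 0 ))      # 336 in this run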
00:07:48.579 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:48.579 01:01:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:48.579 01:01:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:07:48.579 01:01:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:48.579 01:01:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:48.579 01:01:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:48.579 01:01:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:48.579 01:01:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:48.579 01:01:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:48.580 01:01:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:48.580 01:01:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:48.580 01:01:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:48.580 01:01:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:48.580 01:01:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:48.580 01:01:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:48.580 01:01:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:48.580 01:01:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:48.580 01:01:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:48.580 01:01:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:48.580 01:01:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:48.580 01:01:04 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:48.580 01:01:04 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:48.580 01:01:04 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:48.580 01:01:04 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.580 01:01:04 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.580 01:01:04 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.580 01:01:04 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:07:48.580 01:01:04 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.580 01:01:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:07:48.580 01:01:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:48.580 01:01:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:48.580 01:01:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:48.580 01:01:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:48.580 01:01:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:48.580 01:01:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:48.580 01:01:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:48.580 01:01:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:48.580 01:01:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:07:48.580 01:01:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:48.580 01:01:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:07:48.580 01:01:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:07:48.580 01:01:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:07:48.580 01:01:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:07:48.580 01:01:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:48.580 01:01:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:48.580 01:01:04 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:07:48.580 01:01:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:48.580 01:01:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:48.580 01:01:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:48.580 01:01:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:48.580 01:01:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:48.580 01:01:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:48.580 01:01:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:48.580 01:01:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:07:48.580 01:01:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:51.113 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:51.113 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:07:51.113 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:51.113 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:51.113 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:51.113 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:51.113 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:51.113 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:07:51.113 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:51.113 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:07:51.113 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:07:51.113 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:07:51.113 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:07:51.113 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:07:51.113 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:07:51.113 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:51.113 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:51.113 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:51.113 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:51.113 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:51.113 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:51.113 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:51.113 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:51.113 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:51.113 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:51.113 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:51.113 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:07:51.113 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:51.113 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:51.113 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:51.113 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:51.113 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:51.113 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:51.113 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:51.113 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:51.113 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:51.113 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:51.113 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:51.113 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:51.113 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:51.113 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:51.113 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:51.113 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:51.113 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:51.113 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:51.113 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:51.113 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:51.113 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:51.113 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:51.113 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:51.113 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:51.113 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:51.113 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:51.113 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:51.113 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:51.113 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:51.113 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:51.113 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:51.113 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:51.113 Found net devices under 0000:09:00.0: cvl_0_0 00:07:51.113 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:51.113 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:51.113 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:51.113 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:51.113 01:01:06 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:51.113 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:51.113 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:51.113 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:51.113 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:51.113 Found net devices under 0000:09:00.1: cvl_0_1 00:07:51.113 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:51.113 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:51.113 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:07:51.114 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:51.114 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:51.114 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:51.114 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:51.114 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:51.114 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:51.114 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:51.114 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:51.114 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:51.114 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:51.114 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:51.114 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:51.114 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:51.114 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:51.114 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:51.114 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:51.114 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:51.114 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:51.114 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:51.114 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:51.114 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:51.114 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:51.114 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:51.114 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:51.114 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:07:51.114 00:07:51.114 --- 10.0.0.2 ping statistics --- 00:07:51.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:51.114 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:07:51.114 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:51.114 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:51.114 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:07:51.114 00:07:51.114 --- 10.0.0.1 ping statistics --- 00:07:51.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:51.114 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:07:51.114 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:51.114 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:07:51.114 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:51.114 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:51.114 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:51.114 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:51.114 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:51.114 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:51.114 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:51.114 01:01:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:07:51.114 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:51.114 01:01:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:51.114 01:01:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:51.114 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=4067456 00:07:51.114 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:51.114 01:01:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 4067456 00:07:51.114 01:01:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 4067456 ']' 00:07:51.114 01:01:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.114 01:01:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:51.114 01:01:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.114 01:01:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:51.114 01:01:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:51.114 [2024-07-16 01:01:06.776769] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 
00:07:51.114 [2024-07-16 01:01:06.776869] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:51.114 EAL: No free 2048 kB hugepages reported on node 1 00:07:51.114 [2024-07-16 01:01:06.844529] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:51.114 [2024-07-16 01:01:06.956889] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:51.114 [2024-07-16 01:01:06.956968] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:51.114 [2024-07-16 01:01:06.956983] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:51.114 [2024-07-16 01:01:06.956994] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:51.114 [2024-07-16 01:01:06.957018] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:51.114 [2024-07-16 01:01:06.957082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:51.114 [2024-07-16 01:01:06.957146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:51.114 [2024-07-16 01:01:06.957123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:51.114 [2024-07-16 01:01:06.957150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.114 01:01:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:51.114 01:01:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:07:51.114 01:01:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:51.114 01:01:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:51.114 01:01:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:51.373 01:01:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:51.373 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:51.373 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode661 00:07:51.631 [2024-07-16 01:01:07.387677] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:07:51.631 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:07:51.631 { 00:07:51.631 "nqn": "nqn.2016-06.io.spdk:cnode661", 00:07:51.631 "tgt_name": "foobar", 00:07:51.631 "method": "nvmf_create_subsystem", 00:07:51.631 "req_id": 1 00:07:51.631 } 00:07:51.631 Got JSON-RPC error response 00:07:51.631 response: 00:07:51.631 { 00:07:51.631 "code": -32603, 00:07:51.631 "message": "Unable to find target foobar" 00:07:51.631 }' 00:07:51.631 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:07:51.631 { 00:07:51.631 "nqn": "nqn.2016-06.io.spdk:cnode661", 00:07:51.631 "tgt_name": "foobar", 00:07:51.631 "method": "nvmf_create_subsystem", 00:07:51.631 "req_id": 1 00:07:51.631 } 00:07:51.631 Got JSON-RPC error response 00:07:51.631 response: 00:07:51.631 { 00:07:51.631 "code": -32603, 00:07:51.631 "message": "Unable to find target foobar" 00:07:51.631 } == 
*\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:07:51.631 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:07:51.631 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode16456 00:07:51.888 [2024-07-16 01:01:07.640537] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16456: invalid serial number 'SPDKISFASTANDAWESOME' 00:07:51.888 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:07:51.888 { 00:07:51.888 "nqn": "nqn.2016-06.io.spdk:cnode16456", 00:07:51.888 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:07:51.888 "method": "nvmf_create_subsystem", 00:07:51.888 "req_id": 1 00:07:51.888 } 00:07:51.888 Got JSON-RPC error response 00:07:51.888 response: 00:07:51.888 { 00:07:51.888 "code": -32602, 00:07:51.888 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:07:51.888 }' 00:07:51.888 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:07:51.888 { 00:07:51.888 "nqn": "nqn.2016-06.io.spdk:cnode16456", 00:07:51.888 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:07:51.888 "method": "nvmf_create_subsystem", 00:07:51.888 "req_id": 1 00:07:51.888 } 00:07:51.888 Got JSON-RPC error response 00:07:51.888 response: 00:07:51.888 { 00:07:51.888 "code": -32602, 00:07:51.888 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:07:51.888 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:07:51.888 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:07:51.888 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode22348 00:07:52.146 [2024-07-16 01:01:07.897421] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22348: invalid model number 'SPDK_Controller' 00:07:52.146 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:07:52.146 { 00:07:52.146 "nqn": "nqn.2016-06.io.spdk:cnode22348", 00:07:52.146 "model_number": "SPDK_Controller\u001f", 00:07:52.146 "method": "nvmf_create_subsystem", 00:07:52.146 "req_id": 1 00:07:52.146 } 00:07:52.146 Got JSON-RPC error response 00:07:52.146 response: 00:07:52.146 { 00:07:52.146 "code": -32602, 00:07:52.146 "message": "Invalid MN SPDK_Controller\u001f" 00:07:52.146 }' 00:07:52.146 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:07:52.146 { 00:07:52.146 "nqn": "nqn.2016-06.io.spdk:cnode22348", 00:07:52.146 "model_number": "SPDK_Controller\u001f", 00:07:52.146 "method": "nvmf_create_subsystem", 00:07:52.146 "req_id": 1 00:07:52.146 } 00:07:52.146 Got JSON-RPC error response 00:07:52.146 response: 00:07:52.146 { 00:07:52.146 "code": -32602, 00:07:52.146 "message": "Invalid MN SPDK_Controller\u001f" 00:07:52.146 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:07:52.146 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:07:52.146 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:07:52.146 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' 
'87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:07:52.146 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:07:52.146 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:07:52.146 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:07:52.146 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:52.146 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:07:52.146 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:07:52.146 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:07:52.146 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:52.146 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:52.146 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:07:52.146 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:07:52.146 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:07:52.146 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:52.146 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:52.146 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:07:52.146 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:07:52.146 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:07:52.146 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:52.146 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:52.146 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:07:52.146 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:07:52.146 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:07:52.146 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:52.147 01:01:07 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
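
The dense trace above and below is target/invalid.sh's gen_random_s helper assembling a random string one character at a time: printf %x renders the chosen code point as hex, echo -e expands it to the actual byte, and string+= appends it. A condensed sketch of the same logic, assuming bash; the 32..127 range and the printf/echo -e two-step are taken from the trace, while the function body itself is a paraphrase rather than the script verbatim:

gen_random_s() {
    local length=$1 ll string=
    # byte values 32..127: printable ASCII plus DEL, matching the traced chars=() array
    local chars=({32..127})
    for ((ll = 0; ll < length; ll++)); do
        # pick a code point at random, print it as hex, then expand it to a character
        string+=$(echo -e "\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
    done
    echo "$string"
}

The real helper additionally checks that the first character is not '-' (the [[ ... == \- ]] test at target/invalid.sh@28), presumably so the generated value cannot be parsed as an option by rpc.py.
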
00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ # == \- ]] 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '#Fjc%tt]Ca=!{Q}+vj`J)' 00:07:52.147 01:01:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '#Fjc%tt]Ca=!{Q}+vj`J)' nqn.2016-06.io.spdk:cnode14314 00:07:52.406 [2024-07-16 01:01:08.226542] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14314: invalid serial number '#Fjc%tt]Ca=!{Q}+vj`J)' 00:07:52.406 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:07:52.406 { 00:07:52.406 "nqn": "nqn.2016-06.io.spdk:cnode14314", 00:07:52.406 "serial_number": "#Fjc%tt]Ca=!{Q}+vj`J)", 00:07:52.406 "method": "nvmf_create_subsystem", 00:07:52.406 "req_id": 1 00:07:52.406 } 00:07:52.406 Got JSON-RPC error response 00:07:52.406 response: 00:07:52.406 { 00:07:52.406 "code": -32602, 00:07:52.406 "message": "Invalid SN #Fjc%tt]Ca=!{Q}+vj`J)" 00:07:52.406 }' 00:07:52.406 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:07:52.406 { 00:07:52.406 "nqn": "nqn.2016-06.io.spdk:cnode14314", 00:07:52.406 "serial_number": "#Fjc%tt]Ca=!{Q}+vj`J)", 00:07:52.406 "method": "nvmf_create_subsystem", 00:07:52.406 "req_id": 1 00:07:52.406 } 00:07:52.406 Got JSON-RPC error response 00:07:52.406 response: 00:07:52.406 { 00:07:52.406 "code": -32602, 00:07:52.406 "message": "Invalid SN #Fjc%tt]Ca=!{Q}+vj`J)" 00:07:52.406 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:07:52.406 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:07:52.406 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:07:52.406 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:07:52.406 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:07:52.406 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:07:52.406 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:07:52.406 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:52.406 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:07:52.406 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:07:52.406 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:07:52.406 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # 
(( ll++ )) 00:07:52.406 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:52.406 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:07:52.406 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:07:52.406 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:07:52.406 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:52.406 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:52.406 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:07:52.406 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:07:52.406 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:07:52.406 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:52.406 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:52.406 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:07:52.406 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:07:52.406 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:07:52.406 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:52.406 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:52.406 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:07:52.406 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:07:52.406 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:07:52.406 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:52.406 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:52.406 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:07:52.406 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:07:52.406 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:07:52.406 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:52.406 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:52.406 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < 
length )) 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 
47 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 
00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 
00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:07:52.407 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:07:52.408 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:52.408 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:52.408 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:07:52.408 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:07:52.408 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:07:52.408 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:52.408 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:52.408 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:07:52.408 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:07:52.408 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:07:52.408 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:52.408 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:52.408 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:07:52.408 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:07:52.408 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
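
The remaining iterations just below complete a 41-byte candidate, which is then handed to rpc.py as a model number; NVMe caps the model-number field at 40 bytes, so the RPC must be rejected, just as the 21-byte string earlier overran the 20-byte serial-number field. Every negative test in this file follows the same capture-and-match shape; a condensed sketch, assuming bash (the 2>&1 redirect and the || true guard are assumptions about the harness, and the rpc.py path and nqn are shortened placeholders):

out=$(rpc.py nvmf_create_subsystem -d "$bad_model_number" nqn.2016-06.io.spdk:cnode1 2>&1) || true
# pass criterion: the captured JSON-RPC error must name the rejected field
[[ $out == *"Invalid MN"* ]]
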
00:07:52.408 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:52.408 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:52.408 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:07:52.408 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:07:52.408 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:07:52.408 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:52.408 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:52.408 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:07:52.408 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:07:52.408 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:07:52.408 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:52.408 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:52.408 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ ] == \- ]] 00:07:52.408 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo ']SxWN[>S:3#Q{NRCF/U9}[8PEMk_KrCewVU4t}!kV' 00:07:52.408 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d ']SxWN[>S:3#Q{NRCF/U9}[8PEMk_KrCewVU4t}!kV' nqn.2016-06.io.spdk:cnode29665 00:07:52.665 [2024-07-16 01:01:08.619821] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29665: invalid model number ']SxWN[>S:3#Q{NRCF/U9}[8PEMk_KrCewVU4t}!kV' 00:07:52.665 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:07:52.665 { 00:07:52.665 "nqn": "nqn.2016-06.io.spdk:cnode29665", 00:07:52.665 "model_number": "]SxWN[>S:3#Q{NRCF/U9}[8PEMk_KrCewVU4t}!kV", 00:07:52.665 "method": "nvmf_create_subsystem", 00:07:52.666 "req_id": 1 00:07:52.666 } 00:07:52.666 Got JSON-RPC error response 00:07:52.666 response: 00:07:52.666 { 00:07:52.666 "code": -32602, 00:07:52.666 "message": "Invalid MN ]SxWN[>S:3#Q{NRCF/U9}[8PEMk_KrCewVU4t}!kV" 00:07:52.666 }' 00:07:52.666 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:07:52.666 { 00:07:52.666 "nqn": "nqn.2016-06.io.spdk:cnode29665", 00:07:52.666 "model_number": "]SxWN[>S:3#Q{NRCF/U9}[8PEMk_KrCewVU4t}!kV", 00:07:52.666 "method": "nvmf_create_subsystem", 00:07:52.666 "req_id": 1 00:07:52.666 } 00:07:52.666 Got JSON-RPC error response 00:07:52.666 response: 00:07:52.666 { 00:07:52.666 "code": -32602, 00:07:52.666 "message": "Invalid MN ]SxWN[>S:3#Q{NRCF/U9}[8PEMk_KrCewVU4t}!kV" 00:07:52.666 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:07:52.666 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:07:52.923 [2024-07-16 01:01:08.868722] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:52.923 01:01:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:07:53.181 01:01:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:07:53.181 01:01:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:07:53.181 01:01:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:07:53.181 01:01:09 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@67 -- # IP= 00:07:53.181 01:01:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:07:53.439 [2024-07-16 01:01:09.366457] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:07:53.439 01:01:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:07:53.439 { 00:07:53.439 "nqn": "nqn.2016-06.io.spdk:cnode", 00:07:53.439 "listen_address": { 00:07:53.439 "trtype": "tcp", 00:07:53.439 "traddr": "", 00:07:53.439 "trsvcid": "4421" 00:07:53.439 }, 00:07:53.439 "method": "nvmf_subsystem_remove_listener", 00:07:53.439 "req_id": 1 00:07:53.439 } 00:07:53.439 Got JSON-RPC error response 00:07:53.439 response: 00:07:53.439 { 00:07:53.439 "code": -32602, 00:07:53.439 "message": "Invalid parameters" 00:07:53.439 }' 00:07:53.439 01:01:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:07:53.439 { 00:07:53.439 "nqn": "nqn.2016-06.io.spdk:cnode", 00:07:53.439 "listen_address": { 00:07:53.439 "trtype": "tcp", 00:07:53.439 "traddr": "", 00:07:53.439 "trsvcid": "4421" 00:07:53.439 }, 00:07:53.439 "method": "nvmf_subsystem_remove_listener", 00:07:53.439 "req_id": 1 00:07:53.439 } 00:07:53.439 Got JSON-RPC error response 00:07:53.439 response: 00:07:53.439 { 00:07:53.439 "code": -32602, 00:07:53.439 "message": "Invalid parameters" 00:07:53.439 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:07:53.439 01:01:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3065 -i 0 00:07:53.697 [2024-07-16 01:01:09.619263] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3065: invalid cntlid range [0-65519] 00:07:53.697 01:01:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:07:53.697 { 00:07:53.697 "nqn": "nqn.2016-06.io.spdk:cnode3065", 00:07:53.697 "min_cntlid": 0, 00:07:53.697 "method": "nvmf_create_subsystem", 00:07:53.697 "req_id": 1 00:07:53.697 } 00:07:53.697 Got JSON-RPC error response 00:07:53.697 response: 00:07:53.697 { 00:07:53.697 "code": -32602, 00:07:53.697 "message": "Invalid cntlid range [0-65519]" 00:07:53.697 }' 00:07:53.697 01:01:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:07:53.697 { 00:07:53.697 "nqn": "nqn.2016-06.io.spdk:cnode3065", 00:07:53.697 "min_cntlid": 0, 00:07:53.697 "method": "nvmf_create_subsystem", 00:07:53.697 "req_id": 1 00:07:53.697 } 00:07:53.697 Got JSON-RPC error response 00:07:53.697 response: 00:07:53.697 { 00:07:53.697 "code": -32602, 00:07:53.697 "message": "Invalid cntlid range [0-65519]" 00:07:53.697 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:53.697 01:01:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11359 -i 65520 00:07:53.954 [2024-07-16 01:01:09.868127] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11359: invalid cntlid range [65520-65519] 00:07:53.954 01:01:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:07:53.954 { 00:07:53.954 "nqn": "nqn.2016-06.io.spdk:cnode11359", 00:07:53.954 "min_cntlid": 65520, 00:07:53.954 "method": "nvmf_create_subsystem", 00:07:53.954 "req_id": 1 00:07:53.954 } 00:07:53.954 Got JSON-RPC error response 00:07:53.954 
response: 00:07:53.954 { 00:07:53.954 "code": -32602, 00:07:53.954 "message": "Invalid cntlid range [65520-65519]" 00:07:53.954 }' 00:07:53.954 01:01:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:07:53.954 { 00:07:53.954 "nqn": "nqn.2016-06.io.spdk:cnode11359", 00:07:53.954 "min_cntlid": 65520, 00:07:53.954 "method": "nvmf_create_subsystem", 00:07:53.954 "req_id": 1 00:07:53.954 } 00:07:53.954 Got JSON-RPC error response 00:07:53.954 response: 00:07:53.954 { 00:07:53.954 "code": -32602, 00:07:53.954 "message": "Invalid cntlid range [65520-65519]" 00:07:53.954 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:53.954 01:01:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22859 -I 0 00:07:54.212 [2024-07-16 01:01:10.125040] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22859: invalid cntlid range [1-0] 00:07:54.212 01:01:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:07:54.212 { 00:07:54.212 "nqn": "nqn.2016-06.io.spdk:cnode22859", 00:07:54.212 "max_cntlid": 0, 00:07:54.212 "method": "nvmf_create_subsystem", 00:07:54.212 "req_id": 1 00:07:54.212 } 00:07:54.212 Got JSON-RPC error response 00:07:54.212 response: 00:07:54.212 { 00:07:54.212 "code": -32602, 00:07:54.212 "message": "Invalid cntlid range [1-0]" 00:07:54.212 }' 00:07:54.212 01:01:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:07:54.212 { 00:07:54.212 "nqn": "nqn.2016-06.io.spdk:cnode22859", 00:07:54.212 "max_cntlid": 0, 00:07:54.212 "method": "nvmf_create_subsystem", 00:07:54.212 "req_id": 1 00:07:54.212 } 00:07:54.212 Got JSON-RPC error response 00:07:54.212 response: 00:07:54.212 { 00:07:54.212 "code": -32602, 00:07:54.212 "message": "Invalid cntlid range [1-0]" 00:07:54.212 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:54.212 01:01:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4113 -I 65520 00:07:54.470 [2024-07-16 01:01:10.369753] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4113: invalid cntlid range [1-65520] 00:07:54.470 01:01:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:07:54.470 { 00:07:54.470 "nqn": "nqn.2016-06.io.spdk:cnode4113", 00:07:54.470 "max_cntlid": 65520, 00:07:54.470 "method": "nvmf_create_subsystem", 00:07:54.470 "req_id": 1 00:07:54.470 } 00:07:54.470 Got JSON-RPC error response 00:07:54.470 response: 00:07:54.470 { 00:07:54.470 "code": -32602, 00:07:54.470 "message": "Invalid cntlid range [1-65520]" 00:07:54.470 }' 00:07:54.470 01:01:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:07:54.470 { 00:07:54.470 "nqn": "nqn.2016-06.io.spdk:cnode4113", 00:07:54.470 "max_cntlid": 65520, 00:07:54.470 "method": "nvmf_create_subsystem", 00:07:54.470 "req_id": 1 00:07:54.470 } 00:07:54.470 Got JSON-RPC error response 00:07:54.470 response: 00:07:54.470 { 00:07:54.470 "code": -32602, 00:07:54.470 "message": "Invalid cntlid range [1-65520]" 00:07:54.470 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:54.470 01:01:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27248 -i 6 -I 5 00:07:54.728 [2024-07-16 01:01:10.634638] nvmf_rpc.c: 
434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27248: invalid cntlid range [6-5] 00:07:54.728 01:01:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:07:54.728 { 00:07:54.728 "nqn": "nqn.2016-06.io.spdk:cnode27248", 00:07:54.728 "min_cntlid": 6, 00:07:54.728 "max_cntlid": 5, 00:07:54.728 "method": "nvmf_create_subsystem", 00:07:54.728 "req_id": 1 00:07:54.728 } 00:07:54.728 Got JSON-RPC error response 00:07:54.728 response: 00:07:54.728 { 00:07:54.728 "code": -32602, 00:07:54.728 "message": "Invalid cntlid range [6-5]" 00:07:54.728 }' 00:07:54.728 01:01:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:07:54.728 { 00:07:54.728 "nqn": "nqn.2016-06.io.spdk:cnode27248", 00:07:54.728 "min_cntlid": 6, 00:07:54.728 "max_cntlid": 5, 00:07:54.728 "method": "nvmf_create_subsystem", 00:07:54.728 "req_id": 1 00:07:54.728 } 00:07:54.728 Got JSON-RPC error response 00:07:54.728 response: 00:07:54.728 { 00:07:54.728 "code": -32602, 00:07:54.728 "message": "Invalid cntlid range [6-5]" 00:07:54.728 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:54.728 01:01:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:07:54.986 01:01:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:07:54.986 { 00:07:54.986 "name": "foobar", 00:07:54.986 "method": "nvmf_delete_target", 00:07:54.986 "req_id": 1 00:07:54.986 } 00:07:54.986 Got JSON-RPC error response 00:07:54.986 response: 00:07:54.986 { 00:07:54.986 "code": -32602, 00:07:54.986 "message": "The specified target doesn'\''t exist, cannot delete it." 00:07:54.986 }' 00:07:54.986 01:01:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:07:54.986 { 00:07:54.986 "name": "foobar", 00:07:54.986 "method": "nvmf_delete_target", 00:07:54.986 "req_id": 1 00:07:54.986 } 00:07:54.986 Got JSON-RPC error response 00:07:54.986 response: 00:07:54.986 { 00:07:54.986 "code": -32602, 00:07:54.986 "message": "The specified target doesn't exist, cannot delete it." 
00:07:54.986 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:07:54.986 01:01:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:07:54.986 01:01:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:07:54.986 01:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:54.986 01:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:07:54.986 01:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:54.986 01:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:07:54.986 01:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:54.986 01:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:54.986 rmmod nvme_tcp 00:07:54.986 rmmod nvme_fabrics 00:07:54.986 rmmod nvme_keyring 00:07:54.986 01:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:54.986 01:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:07:54.986 01:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:07:54.986 01:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 4067456 ']' 00:07:54.986 01:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 4067456 00:07:54.986 01:01:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 4067456 ']' 00:07:54.986 01:01:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 4067456 00:07:54.986 01:01:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:07:54.986 01:01:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:54.986 01:01:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4067456 00:07:54.986 01:01:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:54.986 01:01:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:54.986 01:01:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4067456' 00:07:54.986 killing process with pid 4067456 00:07:54.986 01:01:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 4067456 00:07:54.986 01:01:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 4067456 00:07:55.243 01:01:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:55.243 01:01:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:55.243 01:01:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:55.243 01:01:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:55.243 01:01:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:55.243 01:01:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:55.243 01:01:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:55.243 01:01:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:57.779 01:01:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:57.779 00:07:57.779 real 0m8.803s 00:07:57.779 user 0m20.204s 00:07:57.779 sys 0m2.531s 00:07:57.779 01:01:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:57.779 01:01:13 
nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:57.779 ************************************ 00:07:57.779 END TEST nvmf_invalid 00:07:57.779 ************************************ 00:07:57.779 01:01:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:57.779 01:01:13 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:57.779 01:01:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:57.779 01:01:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:57.779 01:01:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:57.779 ************************************ 00:07:57.779 START TEST nvmf_abort 00:07:57.779 ************************************ 00:07:57.779 01:01:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:57.779 * Looking for test storage... 00:07:57.779 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:57.779 01:01:13 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:57.779 01:01:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:57.779 01:01:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:57.779 01:01:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:57.779 01:01:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:57.779 01:01:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:57.779 01:01:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:57.779 01:01:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:57.779 01:01:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:57.779 01:01:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:57.779 01:01:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:57.779 01:01:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:57.779 01:01:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:57.779 01:01:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:57.779 01:01:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:57.779 01:01:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:57.779 01:01:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:57.779 01:01:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:57.779 01:01:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:57.779 01:01:13 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:57.779 01:01:13 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:57.779 01:01:13 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:57.779 01:01:13 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.779 01:01:13 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.779 01:01:13 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.779 01:01:13 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:57.779 01:01:13 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.779 01:01:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:07:57.779 01:01:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:57.779 01:01:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:57.779 01:01:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:57.779 01:01:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:57.779 01:01:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:57.779 01:01:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:57.779 01:01:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:57.779 01:01:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:57.779 01:01:13 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:57.779 01:01:13 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:57.779 01:01:13 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:57.779 01:01:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:57.779 01:01:13 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:57.779 01:01:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:57.779 01:01:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:57.779 01:01:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:57.779 01:01:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:57.779 01:01:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:57.779 01:01:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:57.779 01:01:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:57.779 01:01:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:57.779 01:01:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:07:57.779 01:01:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:59.682 
01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:59.682 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:59.682 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:59.682 Found net devices under 0000:09:00.0: cvl_0_0 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- 
# for net_dev in "${!pci_net_devs[@]}" 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:59.682 Found net devices under 0000:09:00.1: cvl_0_1 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:59.682 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:59.683 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:59.683 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:59.683 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:59.683 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:59.683 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:59.683 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:59.683 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:59.683 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:59.683 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:59.683 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:07:59.683 00:07:59.683 --- 10.0.0.2 ping statistics --- 00:07:59.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:59.683 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:07:59.683 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:59.683 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:59.683 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:07:59.683 00:07:59.683 --- 10.0.0.1 ping statistics --- 00:07:59.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:59.683 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:07:59.683 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:59.683 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:07:59.683 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:59.683 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:59.683 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:59.683 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:59.683 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:59.683 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:59.683 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:59.683 01:01:15 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:59.683 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:59.683 01:01:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:59.683 01:01:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:59.683 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=4070090 00:07:59.683 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:59.683 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 4070090 00:07:59.683 01:01:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 4070090 ']' 00:07:59.683 01:01:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.683 01:01:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:59.683 01:01:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.683 01:01:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:59.683 01:01:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:59.942 [2024-07-16 01:01:15.698717] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 
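The two ping exchanges just above are the sanity check on the topology that nvmf_tcp_init built: one port of the dual-port e810 (cvl_0_0) is moved into a private network namespace to act as the target, while its sibling port (cvl_0_1) stays in the root namespace as the initiator, so with NET_TYPE=phy the NVMe/TCP traffic crosses the physical link between the two ports rather than kernel loopback. A minimal sketch of that plumbing, using the interface and namespace names from this run:

  # target-side port gets its own namespace and 10.0.0.2
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open TCP/4420 on the initiator side, then verify reachability both ways
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1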
00:07:59.942 [2024-07-16 01:01:15.698796] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:59.942 EAL: No free 2048 kB hugepages reported on node 1 00:07:59.942 [2024-07-16 01:01:15.762987] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:59.942 [2024-07-16 01:01:15.872438] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:59.942 [2024-07-16 01:01:15.872491] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:59.942 [2024-07-16 01:01:15.872519] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:59.942 [2024-07-16 01:01:15.872531] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:59.942 [2024-07-16 01:01:15.872540] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:59.942 [2024-07-16 01:01:15.872623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:59.942 [2024-07-16 01:01:15.872685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:59.942 [2024-07-16 01:01:15.872689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:00.200 01:01:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:00.200 01:01:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:08:00.200 01:01:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:00.200 01:01:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:00.200 01:01:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:00.200 01:01:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:00.200 01:01:16 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:08:00.200 01:01:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.200 01:01:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:00.200 [2024-07-16 01:01:16.007903] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:00.200 01:01:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.200 01:01:16 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:08:00.200 01:01:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.200 01:01:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:00.200 Malloc0 00:08:00.200 01:01:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.200 01:01:16 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:00.200 01:01:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.200 01:01:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:00.200 Delay0 00:08:00.200 01:01:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.200 01:01:16 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
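Two details of the bdev stack assembled above are easy to miss in the trace: Malloc0 is a 64 MiB ramdisk with 4096-byte blocks (bdev_malloc_create takes size in MiB and block size as positionals), and Delay0 wraps it in artificial latency, where -r/-t/-w/-n are the average and p99 read/write latencies in microseconds. The 1000000 values give every I/O on Delay0 roughly a second of delay, presumably so that enough commands stay in flight for the abort example to have something to abort. Condensed:

  rpc_cmd bdev_malloc_create 64 4096 -b Malloc0    # 64 MiB ramdisk, 4096 B blocks
  rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 \
          -r 1000000 -t 1000000 -w 1000000 -n 1000000   # ~1 s avg/p99 latency, reads and writes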
00:08:00.200 01:01:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.200 01:01:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:00.200 01:01:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.200 01:01:16 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:08:00.200 01:01:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.200 01:01:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:00.200 01:01:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.200 01:01:16 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:00.200 01:01:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.200 01:01:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:00.200 [2024-07-16 01:01:16.070357] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:00.200 01:01:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.200 01:01:16 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:00.200 01:01:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.200 01:01:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:00.200 01:01:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.200 01:01:16 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:08:00.200 EAL: No free 2048 kB hugepages reported on node 1 00:08:00.477 [2024-07-16 01:01:16.208116] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:02.395 Initializing NVMe Controllers 00:08:02.395 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:02.395 controller IO queue size 128 less than required 00:08:02.395 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:08:02.395 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:08:02.395 Initialization complete. Launching workers. 
00:08:02.395 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 33906 00:08:02.395 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33971, failed to submit 62 00:08:02.395 success 33910, unsuccess 61, failed 0 00:08:02.395 01:01:18 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:02.395 01:01:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.395 01:01:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:02.395 01:01:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.395 01:01:18 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:08:02.395 01:01:18 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:08:02.396 01:01:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:02.396 01:01:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:08:02.396 01:01:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:02.396 01:01:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:08:02.396 01:01:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:02.396 01:01:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:02.396 rmmod nvme_tcp 00:08:02.396 rmmod nvme_fabrics 00:08:02.396 rmmod nvme_keyring 00:08:02.396 01:01:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:02.396 01:01:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:08:02.396 01:01:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:08:02.396 01:01:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 4070090 ']' 00:08:02.396 01:01:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 4070090 00:08:02.396 01:01:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 4070090 ']' 00:08:02.396 01:01:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 4070090 00:08:02.396 01:01:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:08:02.396 01:01:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:02.396 01:01:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4070090 00:08:02.396 01:01:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:02.396 01:01:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:02.396 01:01:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4070090' 00:08:02.396 killing process with pid 4070090 00:08:02.396 01:01:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 4070090 00:08:02.396 01:01:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 4070090 00:08:02.964 01:01:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:02.964 01:01:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:02.964 01:01:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:02.964 01:01:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:02.964 01:01:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:02.964 01:01:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:02.964 01:01:18 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:02.964 01:01:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:04.872 01:01:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:04.872 00:08:04.872 real 0m7.488s 00:08:04.872 user 0m10.672s 00:08:04.872 sys 0m2.604s 00:08:04.872 01:01:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:04.872 01:01:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:04.872 ************************************ 00:08:04.872 END TEST nvmf_abort 00:08:04.872 ************************************ 00:08:04.872 01:01:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:04.872 01:01:20 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:04.872 01:01:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:04.872 01:01:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.872 01:01:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:04.872 ************************************ 00:08:04.872 START TEST nvmf_ns_hotplug_stress 00:08:04.872 ************************************ 00:08:04.872 01:01:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:04.872 * Looking for test storage... 00:08:04.872 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:04.872 01:01:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:04.872 01:01:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:08:04.872 01:01:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:04.872 01:01:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:04.872 01:01:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:04.872 01:01:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:04.872 01:01:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:04.872 01:01:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:04.872 01:01:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:04.872 01:01:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:04.872 01:01:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:04.872 01:01:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:04.872 01:01:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:04.872 01:01:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:04.872 01:01:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:04.872 01:01:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:04.872 01:01:20 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:04.872 01:01:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:04.872 01:01:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:04.872 01:01:20 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:04.872 01:01:20 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:04.872 01:01:20 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:04.872 01:01:20 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.872 01:01:20 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.872 01:01:20 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.872 01:01:20 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:08:04.872 01:01:20 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.872 01:01:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:08:04.872 01:01:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:04.872 01:01:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:04.872 01:01:20 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:04.872 01:01:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:04.872 01:01:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:04.872 01:01:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:04.872 01:01:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:04.872 01:01:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:04.872 01:01:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:04.872 01:01:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:08:04.872 01:01:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:04.872 01:01:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:04.872 01:01:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:04.872 01:01:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:04.872 01:01:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:04.872 01:01:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:04.872 01:01:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:04.872 01:01:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:04.872 01:01:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:04.872 01:01:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:04.872 01:01:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:08:04.872 01:01:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:07.399 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:07.399 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:07.399 01:01:22 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:07.399 Found net devices under 0000:09:00.0: cvl_0_0 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:07.399 Found net devices under 0000:09:00.1: cvl_0_1 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:07.399 01:01:22 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:07.399 01:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:07.399 01:01:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:07.399 01:01:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:07.399 01:01:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:07.399 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:07.399 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:08:07.399 00:08:07.399 --- 10.0.0.2 ping statistics --- 00:08:07.399 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:07.399 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:08:07.399 01:01:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:07.399 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:07.399 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:08:07.399 00:08:07.399 --- 10.0.0.1 ping statistics --- 00:08:07.399 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:07.399 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:08:07.399 01:01:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:07.399 01:01:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:08:07.399 01:01:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:07.400 01:01:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:07.400 01:01:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:07.400 01:01:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:07.400 01:01:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:07.400 01:01:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:07.400 01:01:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:07.400 01:01:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:08:07.400 01:01:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:07.400 01:01:23 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:07.400 01:01:23 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:07.400 01:01:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=4072380 00:08:07.400 01:01:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:07.400 01:01:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 4072380 00:08:07.400 01:01:23 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 4072380 ']' 00:08:07.400 01:01:23 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:07.400 01:01:23 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:07.400 01:01:23 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:07.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:07.400 01:01:23 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:07.400 01:01:23 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:07.400 [2024-07-16 01:01:23.099816] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 
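The bring-up for this second test mirrors the abort test, but the nvmfappstart line above shows the part that matters: the target binary itself runs inside the network namespace, which is why NVMF_APP was prefixed with NVMF_TARGET_NS_CMD after the ping check. Roughly (the backgrounding and PID capture are implied by the later waitforlisten call rather than visible in the trace, and the reading of -i as the shm instance id is an assumption):

  # -m 0xE pins reactors to cores 1-3, matching the three reactor_run notices;
  # -e 0xFFFF enables all tracepoint groups; -i 0 fixes the shm instance id (assumed)
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  waitforlisten $nvmfpid    # blocks until the app answers on /var/tmp/spdk.sock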
00:08:07.400 [2024-07-16 01:01:23.099902] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:07.400 EAL: No free 2048 kB hugepages reported on node 1 00:08:07.400 [2024-07-16 01:01:23.162306] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:07.400 [2024-07-16 01:01:23.262338] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:07.400 [2024-07-16 01:01:23.262397] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:07.400 [2024-07-16 01:01:23.262426] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:07.400 [2024-07-16 01:01:23.262437] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:07.400 [2024-07-16 01:01:23.262447] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:07.400 [2024-07-16 01:01:23.262531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:07.400 [2024-07-16 01:01:23.262597] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:07.400 [2024-07-16 01:01:23.262600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:07.400 01:01:23 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:07.400 01:01:23 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:08:07.400 01:01:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:07.400 01:01:23 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:07.400 01:01:23 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:07.657 01:01:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:07.657 01:01:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:08:07.657 01:01:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:07.657 [2024-07-16 01:01:23.634561] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:07.914 01:01:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:07.914 01:01:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:08.171 [2024-07-16 01:01:24.137174] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:08.171 01:01:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:08.428 01:01:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:08:08.685 Malloc0 00:08:08.685 01:01:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:08.941 Delay0 00:08:08.941 01:01:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:09.198 01:01:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:08:09.455 NULL1 00:08:09.455 01:01:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:09.712 01:01:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=4072731 00:08:09.712 01:01:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:08:09.712 01:01:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4072731 00:08:09.712 01:01:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.712 EAL: No free 2048 kB hugepages reported on node 1 00:08:11.085 Read completed with error (sct=0, sc=11) 00:08:11.085 01:01:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:11.085 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:11.085 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:11.085 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:11.085 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:11.085 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:11.085 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:11.085 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:11.342 01:01:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:08:11.342 01:01:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:08:11.342 true 00:08:11.342 01:01:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4072731 00:08:11.342 01:01:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.274 01:01:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:12.532 01:01:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:08:12.532 01:01:28 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:08:12.789 true 00:08:12.789 01:01:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4072731 00:08:12.789 01:01:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:13.045 01:01:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:13.302 01:01:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:08:13.302 01:01:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:08:13.558 true 00:08:13.558 01:01:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4072731 00:08:13.558 01:01:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:13.813 01:01:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:14.070 01:01:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:08:14.070 01:01:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:08:14.326 true 00:08:14.326 01:01:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4072731 00:08:14.326 01:01:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:15.254 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:15.254 01:01:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:15.817 01:01:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:08:15.817 01:01:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:08:15.817 true 00:08:16.074 01:01:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4072731 00:08:16.074 01:01:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.331 01:01:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:16.331 01:01:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:08:16.331 01:01:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:08:16.588 true 00:08:16.588 01:01:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4072731 00:08:16.588 01:01:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.845 01:01:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:17.102 01:01:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:08:17.102 01:01:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:08:17.359 true 00:08:17.359 01:01:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4072731 00:08:17.359 01:01:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:18.728 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:18.728 01:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:18.728 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:18.728 01:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:08:18.728 01:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:18.986 true 00:08:18.986 01:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4072731 00:08:18.986 01:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:19.243 01:01:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:19.500 01:01:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:08:19.500 01:01:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:08:19.758 true 00:08:19.758 01:01:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4072731 00:08:19.758 01:01:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:20.731 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:20.731 01:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:20.987 01:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 
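The shape of the stress loop is now clear, and it simply repeats from here until the 30-second perf run ends: while spdk_nvme_perf (PERF_PID 4072731) keeps a 128-deep 512-byte randread stream going, each pass hot-removes namespace 1 out from under it, re-adds Delay0, bumps null_size, and grows NULL1 to match, with kill -0 asserting that the initiator survived. Per iteration, following the script line numbers in the trace (@44 through @50):

  kill -0 $PERF_PID                                               # initiator still alive?
  rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # hot-remove NSID 1 under I/O
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # hot-add it back
  null_size=$((null_size + 1))
  rpc.py bdev_null_resize NULL1 $null_size                        # grow NULL1 to the new size
  kill -0 $PERF_PID                                               # next pass starts with the same check

The recurring 'Read completed with error (sct=0, sc=11)' lines, suppressed 999 at a time, are the expected failures on reads that land while NSID 1 is detached, not a test problem.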
00:08:20.987 01:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:08:21.242 true 00:08:21.242 01:01:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4072731 00:08:21.242 01:01:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:21.498 01:01:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:21.755 01:01:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:08:21.755 01:01:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:08:22.011 true 00:08:22.011 01:01:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4072731 00:08:22.011 01:01:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:22.940 01:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:22.940 01:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:08:22.940 01:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:08:23.197 true 00:08:23.197 01:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4072731 00:08:23.197 01:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:23.453 01:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:23.710 01:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:08:23.710 01:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:23.967 true 00:08:23.967 01:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4072731 00:08:23.967 01:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:24.899 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:24.900 01:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:24.900 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:24.900 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:25.157 01:01:40 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:08:25.157 01:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:08:25.157 true 00:08:25.414 01:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4072731 00:08:25.414 01:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:25.414 01:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:25.671 01:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:08:25.671 01:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:25.929 true 00:08:25.929 01:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4072731 00:08:25.929 01:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:26.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:26.861 01:01:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:27.429 01:01:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:08:27.429 01:01:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:27.429 true 00:08:27.429 01:01:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4072731 00:08:27.429 01:01:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:27.689 01:01:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:27.946 01:01:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:08:27.946 01:01:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:08:28.203 true 00:08:28.203 01:01:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4072731 00:08:28.203 01:01:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:29.136 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:29.136 01:01:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:29.136 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:29.394 01:01:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:08:29.394 01:01:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:08:29.651 true 00:08:29.651 01:01:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4072731 00:08:29.651 01:01:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:29.908 01:01:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:30.166 01:01:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:08:30.166 01:01:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:08:30.423 true 00:08:30.423 01:01:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4072731 00:08:30.423 01:01:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:31.355 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:31.355 01:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:31.613 01:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:08:31.613 01:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:08:31.897 true 00:08:31.897 01:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4072731 00:08:31.897 01:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:32.154 01:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:32.411 01:01:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:08:32.411 01:01:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:08:32.669 true 00:08:32.669 01:01:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4072731 00:08:32.669 01:01:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:33.602 01:01:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:33.602 
01:01:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:08:33.602 01:01:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:33.859 true 00:08:33.859 01:01:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4072731 00:08:33.859 01:01:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:34.116 01:01:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:34.372 01:01:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:08:34.372 01:01:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:34.629 true 00:08:34.629 01:01:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4072731 00:08:34.629 01:01:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:35.562 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:35.562 01:01:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:35.562 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:35.562 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:35.562 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:35.820 01:01:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:08:35.820 01:01:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:36.078 true 00:08:36.078 01:01:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4072731 00:08:36.078 01:01:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:36.336 01:01:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:36.593 01:01:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:08:36.593 01:01:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:36.852 true 00:08:36.852 01:01:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4072731 00:08:36.852 01:01:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:37.818 Message suppressed 999 times: Read completed 
with error (sct=0, sc=11) 00:08:37.818 01:01:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:37.818 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:38.075 01:01:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:08:38.075 01:01:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:38.333 true 00:08:38.333 01:01:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4072731 00:08:38.333 01:01:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:38.591 01:01:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:38.848 01:01:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:08:38.848 01:01:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:39.105 true 00:08:39.105 01:01:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4072731 00:08:39.105 01:01:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:39.363 01:01:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:39.620 01:01:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:08:39.620 01:01:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:08:39.877 true 00:08:39.877 01:01:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4072731 00:08:39.877 01:01:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:40.809 Initializing NVMe Controllers 00:08:40.809 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:40.809 Controller IO queue size 128, less than required. 00:08:40.809 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:40.810 Controller IO queue size 128, less than required. 00:08:40.810 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:40.810 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:40.810 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:08:40.810 Initialization complete. Launching workers. 
00:08:40.810 ========================================================
00:08:40.810 Latency(us)
00:08:40.810 Device Information                                                      :       IOPS      MiB/s    Average        min        max
00:08:40.810 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:     862.63       0.42   77476.27    2252.65 1037520.98
00:08:40.810 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   10471.76       5.11   12223.67    3478.71  539425.81
00:08:40.810 ========================================================
00:08:40.810 Total                                                                   :   11334.39       5.53   17189.86    2252.65 1037520.98
00:08:40.810
00:08:40.810 01:01:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:41.068 01:01:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:08:41.068 01:01:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:08:41.325 true
00:08:41.325 01:01:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4072731
00:08:41.325 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (4072731) - No such process
00:08:41.325 01:01:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 4072731
00:08:41.325 01:01:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:41.581 01:01:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:41.838 01:01:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:08:41.838 01:01:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:08:41.838 01:01:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:08:41.838 01:01:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:41.838 01:01:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:08:42.095 null0
00:08:42.095 01:01:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:42.095 01:01:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:42.095 01:01:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:08:42.351 null1
00:08:42.351 01:01:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:42.351 01:01:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:42.351 01:01:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:08:42.607 null2
00:08:42.607 01:01:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:42.607 01:01:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i <
nthreads )) 00:08:42.608 01:01:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:42.864 null3 00:08:42.864 01:01:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:42.864 01:01:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:42.864 01:01:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:43.165 null4 00:08:43.165 01:01:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:43.165 01:01:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:43.165 01:01:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:43.426 null5 00:08:43.426 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:43.426 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:43.426 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:43.682 null6 00:08:43.682 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:43.682 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:43.682 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:43.940 null7 00:08:43.941 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:43.941 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:43.941 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:43.941 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:43.941 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:43.941 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:43.941 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:43.941 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:43.941 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:43.941 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:43.941 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:43.941 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:43.941 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
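
The sh@58-60 trace markers above show the fixture for the hot-plug phase: eight null bdevs, one per worker thread. As a minimal sketch of the loop those markers imply (only the rpc.py path and the arguments are taken from the log; the script source itself is not part of this console output):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nthreads=8

    # One 100 MB null bdev with 4096-byte blocks per worker, named null0..null7,
    # matching the bdev_null_create calls traced at sh@60.
    for ((i = 0; i < nthreads; i++)); do
        "$rpc_py" bdev_null_create "null$i" 100 4096
    done

Each call prints the new bdev's name, which is where the bare null0 .. null7 lines interleaved above come from.
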
00:08:43.941 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:43.941 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:43.941 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:43.941 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:43.941 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:43.941 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:43.941 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:43.941 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:43.941 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:43.941 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:43.941 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:43.941 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:43.941 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:43.941 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:43.941 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:43.941 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:43.941 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:43.941 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:43.941 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:43.941 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:43.941 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:43.941 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:43.941 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:43.941 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
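
The sh@14-18 markers traced for every worker spell out the body of add_remove: ten rounds of attaching a fixed null bdev as a fixed namespace ID and immediately detaching it, always against the same subsystem. A minimal sketch reconstructed from the xtrace lines alone, with rpc_py as in the sketch above:

    # nsid and bdev are the worker's fixed namespace ID and null bdev (sh@14).
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

Eight of these running at once give the target a continuous churn of namespace attach/detach events on cnode1, which is the point of the stress.
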
00:08:43.941 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:43.941 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:43.941 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:43.941 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:43.941 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:43.941 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:43.941 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:43.941 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:43.941 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:43.941 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:43.941 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:43.941 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:43.941 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:43.941 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:43.941 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:43.941 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:43.941 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:43.941 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:43.941 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:43.941 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:43.941 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:43.941 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:43.941 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:43.941 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
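
The sh@62-66 markers frame the fan-out itself: each add_remove call is backgrounded, its pid recorded, and the parent then waits on the whole set (the literal "wait 4076797 4076798 ..." a few entries below is that array expanded). A sketch of the launcher, assuming add_remove and nthreads as defined in the sketches above:

    pids=()
    for ((i = 0; i < nthreads; i++)); do
        # Worker i hammers namespace ID i+1 using bdev null$i (sh@63).
        add_remove $((i + 1)) "null$i" &
        pids+=($!)    # recorded at sh@64, consumed by the wait at sh@66
    done
    wait "${pids[@]}"
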
00:08:43.941 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:43.941 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:43.941 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:43.941 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:43.941 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:43.941 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 4076797 4076798 4076800 4076802 4076804 4076806 4076808 4076810 00:08:43.941 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:43.941 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:44.198 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:44.198 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:44.198 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:44.198 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:44.198 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:44.198 01:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:44.198 01:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:44.198 01:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:44.455 01:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:44.455 01:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:44.455 01:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:44.455 01:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:44.455 01:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:44.455 01:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 
nqn.2016-06.io.spdk:cnode1 null7 00:08:44.455 01:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:44.455 01:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:44.455 01:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:44.455 01:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:44.455 01:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:44.455 01:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:44.455 01:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:44.455 01:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:44.455 01:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:44.455 01:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:44.455 01:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:44.455 01:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:44.455 01:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:44.455 01:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:44.455 01:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:44.455 01:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:44.455 01:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:44.455 01:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:44.713 01:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:44.713 01:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:44.713 01:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:44.713 01:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:44.713 01:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:44.713 01:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:44.713 01:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:44.713 01:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:44.972 01:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:44.972 01:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:44.972 01:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:44.972 01:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:44.972 01:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:44.972 01:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:44.972 01:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:44.972 01:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:44.972 01:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:44.972 01:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:44.972 01:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:44.972 01:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:44.972 01:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:44.972 01:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:44.972 01:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:44.972 01:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:44.972 01:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:44.972 01:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:44.972 01:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:44.972 01:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:44.972 01:02:00 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:44.972 01:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:44.972 01:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:44.972 01:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:45.230 01:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:45.230 01:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:45.230 01:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:45.230 01:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:45.230 01:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:45.231 01:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:45.231 01:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:45.231 01:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:45.489 01:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:45.489 01:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:45.489 01:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:45.489 01:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:45.489 01:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:45.489 01:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:45.489 01:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:45.489 01:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:45.489 01:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:45.489 01:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:45.489 01:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:45.489 01:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:45.489 01:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:45.489 01:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:45.489 01:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:45.489 01:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:45.489 01:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:45.489 01:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:45.489 01:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:45.489 01:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:45.489 01:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:45.489 01:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:45.489 01:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:45.489 01:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:45.747 01:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:45.747 01:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:45.747 01:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:45.747 01:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:45.747 01:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:45.747 01:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:45.747 01:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:45.747 01:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:46.014 01:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:46.014 01:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:46.014 01:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:46.014 01:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:46.014 01:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:46.014 01:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:46.014 01:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:46.014 01:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:46.014 01:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:46.014 01:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:46.014 01:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:46.014 01:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:46.014 01:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:46.014 01:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:46.014 01:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:46.014 01:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:46.014 01:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:46.014 01:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:46.014 01:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:46.014 01:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:46.014 01:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:46.014 01:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:46.014 01:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:46.014 
01:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:46.272 01:02:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:46.272 01:02:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:46.272 01:02:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:46.272 01:02:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:46.272 01:02:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:46.272 01:02:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:46.272 01:02:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:46.272 01:02:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:46.529 01:02:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:46.529 01:02:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:46.529 01:02:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:46.529 01:02:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:46.529 01:02:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:46.529 01:02:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:46.529 01:02:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:46.529 01:02:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:46.529 01:02:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:46.529 01:02:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:46.529 01:02:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:46.529 01:02:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:46.529 01:02:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:46.529 01:02:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:46.529 01:02:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:46.529 01:02:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:46.529 01:02:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:46.529 01:02:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:46.529 01:02:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:46.529 01:02:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:46.530 01:02:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:46.530 01:02:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:46.530 01:02:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:46.530 01:02:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:46.787 01:02:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:46.787 01:02:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:46.787 01:02:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:46.787 01:02:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:46.787 01:02:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:46.787 01:02:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:46.787 01:02:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:46.787 01:02:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:47.045 01:02:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:08:47.045 01:02:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.045 01:02:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:47.045 01:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.045 01:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.045 01:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:47.045 01:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.045 01:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.045 01:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:47.045 01:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.045 01:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.045 01:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:47.045 01:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.045 01:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.045 01:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:47.045 01:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.045 01:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.045 01:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:47.045 01:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.045 01:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.045 01:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:47.045 01:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.045 01:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.045 01:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:47.312 01:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:47.312 
01:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:47.313 01:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:47.313 01:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:47.313 01:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:47.313 01:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:47.313 01:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:47.313 01:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:47.574 01:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.574 01:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.574 01:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:47.574 01:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.574 01:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.574 01:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:47.574 01:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.574 01:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.574 01:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:47.574 01:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.574 01:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.574 01:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:47.574 01:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.574 01:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.574 01:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:47.574 01:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.574 01:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.574 01:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:47.574 01:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.574 01:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.574 01:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:47.574 01:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.574 01:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.574 01:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:47.832 01:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:47.832 01:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:48.089 01:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:48.089 01:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:48.089 01:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:48.089 01:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:48.089 01:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:48.089 01:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:48.347 01:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.347 01:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.347 01:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:48.347 01:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:08:48.347 01:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.347 01:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:48.347 01:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.347 01:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.347 01:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:48.347 01:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.347 01:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.347 01:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:48.347 01:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.347 01:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.347 01:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:48.347 01:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.347 01:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.347 01:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:48.347 01:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.347 01:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.347 01:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:48.347 01:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.347 01:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.347 01:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:48.605 01:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:48.605 01:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:48.605 01:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:48.605 
01:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:48.605 01:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:48.605 01:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:48.605 01:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:48.605 01:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:48.863 01:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.863 01:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.863 01:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:48.863 01:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.863 01:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.863 01:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:48.863 01:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.863 01:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.863 01:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:48.863 01:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.863 01:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.863 01:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:48.863 01:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.863 01:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.863 01:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.863 01:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.863 01:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:48.863 01:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:48.863 01:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.863 01:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.863 01:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:48.863 01:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.863 01:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.863 01:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:49.121 01:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:49.121 01:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:49.121 01:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:49.121 01:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:49.121 01:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:49.121 01:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:49.121 01:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:49.121 01:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:49.379 01:02:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:49.379 01:02:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:49.379 01:02:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:49.379 01:02:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:49.379 01:02:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:49.379 01:02:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:49.379 01:02:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:49.379 01:02:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:49.379 01:02:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
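The xtrace above is the whole hotplug mechanism: target/ns_hotplug_stress.sh@16 is a loop counter bounded at 10, @17 attaches a null bdev as a namespace of nqn.2016-06.io.spdk:cnode1, and @18 detaches it again. Several of these loops run against the subsystem at once, which is why the -n values land in a different order on every pass. A minimal sketch of that structure, reconstructed from the trace (the add_remove name, the backgrounding, and the quoting are assumptions; only the rpc.py invocations and the i < 10 bound appear verbatim in the log):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # One worker per namespace: attach its null bdev, detach it, ten times over.
    add_remove() {                     # function name is an assumption
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; ++i)); do                                               # sh@16
            $rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev" # sh@17
            $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"         # sh@18
        done
    }

    # Eight workers in parallel (nsid 1..8 mapped to null0..null7); the racing
    # subshells produce the shuffled ordering and interleaved counters seen above.
    for n in {1..8}; do
        add_remove "$n" "null$((n - 1))" &
    done
    wait

Running the workers concurrently is the point of the test: the target has to serialize racing attach/detach RPCs against the same subsystem without corrupting its namespace list.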
00:08:49.379 01:02:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:49.379 01:02:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:49.379 01:02:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:49.379 01:02:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:49.379 01:02:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:49.379 01:02:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:49.379 01:02:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:49.379 01:02:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:49.379 01:02:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:49.379 01:02:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:49.379 01:02:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:08:49.379 01:02:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:49.379 01:02:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:08:49.379 01:02:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:49.379 01:02:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:49.379 rmmod nvme_tcp 00:08:49.379 rmmod nvme_fabrics 00:08:49.379 rmmod nvme_keyring 00:08:49.379 01:02:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:49.379 01:02:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:08:49.379 01:02:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:08:49.379 01:02:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 4072380 ']' 00:08:49.379 01:02:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 4072380 00:08:49.379 01:02:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 4072380 ']' 00:08:49.379 01:02:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 4072380 00:08:49.379 01:02:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:08:49.379 01:02:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:49.379 01:02:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4072380 00:08:49.379 01:02:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:49.379 01:02:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:49.379 01:02:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4072380' 00:08:49.379 killing process with pid 4072380 00:08:49.379 01:02:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 4072380 00:08:49.379 01:02:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 4072380 00:08:49.639 01:02:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:49.639 01:02:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:49.639 01:02:05 nvmf_tcp.nvmf_ns_hotplug_stress -- 
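The hotplug test above exits through nvmftestfini, and the same teardown is what connect_stress re-registers below via its trap in nvmf/common.sh@446. Condensed from the traced nvmf/common.sh calls into one sequence (the set +e/set -e guards and the 14> /dev/null xtrace redirection are dropped; the exit condition of the modprobe retry loop is not visible in the log, and ip netns delete is an assumption for what _remove_spdk_ns does to cvl_0_0_ns_spdk):

    sync
    for i in {1..20}; do                  # retry while initiator module users drain
        modprobe -v -r nvme-tcp && break  # rmmod nvme_tcp / nvme_fabrics / nvme_keyring
    done
    modprobe -v -r nvme-fabrics
    kill 4072380 && wait 4072380          # killprocess: this run's nvmf_tgt pid
    ip netns delete cvl_0_0_ns_spdk       # assumption: the _remove_spdk_ns body is not traced
    ip -4 addr flush cvl_0_1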
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:49.639 01:02:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:49.639 01:02:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:49.639 01:02:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:49.639 01:02:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:49.639 01:02:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:52.175 01:02:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:52.175 00:08:52.175 real 0m46.870s 00:08:52.175 user 3m33.796s 00:08:52.175 sys 0m16.154s 00:08:52.175 01:02:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:52.175 01:02:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:52.175 ************************************ 00:08:52.175 END TEST nvmf_ns_hotplug_stress 00:08:52.175 ************************************ 00:08:52.175 01:02:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:52.175 01:02:07 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:52.175 01:02:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:52.175 01:02:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:52.175 01:02:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:52.175 ************************************ 00:08:52.175 START TEST nvmf_connect_stress 00:08:52.175 ************************************ 00:08:52.175 01:02:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:52.175 * Looking for test storage... 
00:08:52.175 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:52.175 01:02:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:52.175 01:02:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:08:52.175 01:02:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:52.175 01:02:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:52.175 01:02:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:52.175 01:02:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:52.175 01:02:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:52.175 01:02:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:52.175 01:02:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:52.175 01:02:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:52.175 01:02:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:52.175 01:02:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:52.175 01:02:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:52.175 01:02:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:52.175 01:02:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:52.175 01:02:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:52.175 01:02:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:52.175 01:02:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:52.175 01:02:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:52.175 01:02:07 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:52.175 01:02:07 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:52.175 01:02:07 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:52.176 01:02:07 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.176 01:02:07 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.176 01:02:07 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.176 01:02:07 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:08:52.176 01:02:07 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.176 01:02:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:08:52.176 01:02:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:52.176 01:02:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:52.176 01:02:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:52.176 01:02:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:52.176 01:02:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:52.176 01:02:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:52.176 01:02:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:52.176 01:02:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:52.176 01:02:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:08:52.176 01:02:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:52.176 01:02:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:52.176 01:02:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:52.176 01:02:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:52.176 01:02:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:52.176 01:02:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:52.176 01:02:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:08:52.176 01:02:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:52.176 01:02:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:52.176 01:02:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:52.176 01:02:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:08:52.176 01:02:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:54.077 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:54.077 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:08:54.077 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:54.077 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:54.077 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:54.077 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:54.077 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:54.077 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:08:54.077 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:54.077 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:08:54.077 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:08:54.077 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:08:54.077 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:54.078 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:54.078 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:54.078 Found net devices under 0000:09:00.0: cvl_0_0 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:54.078 01:02:09 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:54.078 Found net devices under 0000:09:00.1: cvl_0_1 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:54.078 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:54.078 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:08:54.078 00:08:54.078 --- 10.0.0.2 ping statistics --- 00:08:54.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.078 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:54.078 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:54.078 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:08:54.078 00:08:54.078 --- 10.0.0.1 ping statistics --- 00:08:54.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.078 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:54.078 01:02:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:54.078 01:02:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:54.078 01:02:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:08:54.078 01:02:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:54.078 01:02:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:54.078 01:02:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:54.078 01:02:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=4079609 00:08:54.078 01:02:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 4079609 00:08:54.078 01:02:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:54.078 01:02:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 4079609 ']' 00:08:54.078 01:02:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.078 01:02:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:54.078 01:02:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:54.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.078 01:02:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:54.078 01:02:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:54.336 [2024-07-16 01:02:10.084680] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 
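The interface discovery and nvmf_tcp_init steps above amount to a small self-contained topology: both e810 ports sit in the same chassis, so the target port cvl_0_0 is moved into a private network namespace and initiator traffic from cvl_0_1 has to cross the physical link to reach it. Every command below appears verbatim in the trace (nvmf/common.sh@244 through @268); only the comments are added:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port leaves the host namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, host namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                  # host to namespace, verified above
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # namespace back to host

nvmfappstart then launches nvmf_tgt inside that namespace (ip netns exec cvl_0_0_ns_spdk ... -i 0 -e 0xFFFF -m 0xE), which is the pid 4079609 being waited on here.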
00:08:54.336 [2024-07-16 01:02:10.084779] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:54.336 EAL: No free 2048 kB hugepages reported on node 1 00:08:54.336 [2024-07-16 01:02:10.151306] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:54.336 [2024-07-16 01:02:10.261734] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:54.336 [2024-07-16 01:02:10.261799] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:54.336 [2024-07-16 01:02:10.261828] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:54.336 [2024-07-16 01:02:10.261840] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:54.336 [2024-07-16 01:02:10.261850] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:54.336 [2024-07-16 01:02:10.261940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:54.336 [2024-07-16 01:02:10.262013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:54.336 [2024-07-16 01:02:10.262017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:54.593 01:02:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:54.593 01:02:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:08:54.593 01:02:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:54.593 01:02:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:54.593 01:02:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:54.593 01:02:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:54.593 01:02:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:54.593 01:02:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.593 01:02:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:54.593 [2024-07-16 01:02:10.409053] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:54.593 01:02:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.593 01:02:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:54.593 01:02:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.593 01:02:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:54.593 01:02:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.593 01:02:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:54.593 01:02:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.593 01:02:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:54.593 [2024-07-16 01:02:10.443133] tcp.c: 
981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:54.593 01:02:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.593 01:02:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:54.593 01:02:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.593 01:02:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:54.593 NULL1 00:08:54.593 01:02:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.593 01:02:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=4079702 00:08:54.593 01:02:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:54.593 01:02:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:54.594 01:02:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:08:54.594 01:02:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:08:54.594 01:02:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:54.594 01:02:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:54.594 01:02:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:54.594 01:02:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:54.594 01:02:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:54.594 01:02:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:54.594 01:02:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:54.594 01:02:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:54.594 01:02:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:54.594 01:02:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:54.594 01:02:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:54.594 01:02:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:54.594 01:02:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:54.594 01:02:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:54.594 01:02:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:54.594 01:02:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:54.594 01:02:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:54.594 01:02:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:54.594 01:02:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:54.594 01:02:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:54.594 01:02:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
00:08:54.594 01:02:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:54.594 01:02:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:54.594 01:02:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:54.594 EAL: No free 2048 kB hugepages reported on node 1 00:08:54.594 01:02:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:54.594 01:02:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:54.594 01:02:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:54.594 01:02:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:54.594 01:02:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:54.594 01:02:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:54.594 01:02:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:54.594 01:02:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:54.594 01:02:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:54.594 01:02:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:54.594 01:02:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:54.594 01:02:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:54.594 01:02:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:54.594 01:02:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:54.594 01:02:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:54.594 01:02:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:54.594 01:02:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4079702 00:08:54.594 01:02:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:54.594 01:02:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.594 01:02:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:54.851 01:02:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.851 01:02:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4079702 00:08:54.851 01:02:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:54.851 01:02:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.851 01:02:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:55.444 01:02:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.444 01:02:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4079702 00:08:55.444 01:02:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:55.444 01:02:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.444 01:02:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:55.701 01:02:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.701 01:02:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4079702 
00:08:55.701 01:02:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:55.701 01:02:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.701 01:02:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:55.959 01:02:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.959 01:02:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4079702
[... the same polling cycle ([[ 0 == 0 ]], kill -0 4079702, rpc_cmd, xtrace_disable/set +x) repeats roughly every 250-600 ms; about 28 near-identical iterations between 00:08:56 and 00:09:04 elided ...]
00:09:04.561 01:02:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.561 01:02:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4079702 00:09:04.561 01:02:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:04.561 01:02:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable
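This stretch of trace is the watchdog half of connect_stress.sh: a stress workload (pid 4079702) runs in the background while the script keeps hitting the target with RPCs for as long as that pid stays alive. A minimal sketch of the pattern, assuming rpc_cmd forwards to scripts/rpc.py and that the rpc.txt removed at line 39 below holds the batched calls (PERF_PID and testdir are illustrative names, not taken from the log):

    # kill -0 sends no signal; it only tests whether the pid still exists.
    # The final failing check is what prints the "No such process" line below.
    while kill -0 "$PERF_PID"; do
        rpc_cmd <"$testdir/rpc.txt"    # replay the pre-generated RPC batch
    done
    wait "$PERF_PID"                   # reap the workload once it exits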
00:09:04.561 01:02:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:04.561 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:04.818 01:02:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.819 01:02:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4079702 00:09:04.819 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (4079702) - No such process 00:09:04.819 01:02:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 4079702 00:09:04.819 01:02:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:09:04.819 01:02:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:04.819 01:02:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:09:04.819 01:02:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:04.819 01:02:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:09:04.819 01:02:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:04.819 01:02:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:09:04.819 01:02:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:04.819 01:02:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:04.819 rmmod nvme_tcp 00:09:04.819 rmmod nvme_fabrics 00:09:05.077 rmmod nvme_keyring 00:09:05.077 01:02:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:05.077 01:02:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:09:05.077 01:02:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:09:05.077 01:02:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 4079609 ']' 00:09:05.077 01:02:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 4079609 00:09:05.077 01:02:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 4079609 ']' 00:09:05.077 01:02:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 4079609 00:09:05.077 01:02:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:09:05.077 01:02:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:05.077 01:02:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4079609 00:09:05.077 01:02:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:05.077 01:02:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:05.077 01:02:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4079609' 00:09:05.077 killing process with pid 4079609 00:09:05.077 01:02:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 4079609 00:09:05.077 01:02:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 4079609 00:09:05.337 01:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:05.337 01:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:05.337 01:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # 
nvmf_tcp_fini 00:09:05.337 01:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:05.337 01:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:05.337 01:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:05.337 01:02:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:05.337 01:02:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:07.246 01:02:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:07.246 00:09:07.246 real 0m15.478s 00:09:07.246 user 0m38.476s 00:09:07.246 sys 0m5.906s 00:09:07.246 01:02:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:07.246 01:02:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:07.246 ************************************ 00:09:07.246 END TEST nvmf_connect_stress 00:09:07.246 ************************************ 00:09:07.246 01:02:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:07.246 01:02:23 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:09:07.246 01:02:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:07.246 01:02:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:07.246 01:02:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:07.246 ************************************ 00:09:07.246 START TEST nvmf_fused_ordering 00:09:07.246 ************************************ 00:09:07.246 01:02:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:09:07.504 * Looking for test storage... 
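The asterisk banners and the real/user/sys triple that close each test come from the suite's run_test wrapper. Inferred from this output only (the real helper lives in autotest_common.sh and does more bookkeeping), it behaves roughly like:

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                # emits the real/user/sys summary seen above
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }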
00:09:07.504 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:07.504 01:02:23 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:07.504 01:02:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:09:07.504 01:02:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:07.504 01:02:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:07.504 01:02:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:07.504 01:02:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:07.504 01:02:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:07.504 01:02:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:07.504 01:02:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:07.504 01:02:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:07.504 01:02:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:07.504 01:02:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:07.504 01:02:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:07.504 01:02:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:07.504 01:02:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:07.504 01:02:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:07.504 01:02:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:07.504 01:02:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:07.504 01:02:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:07.504 01:02:23 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:07.504 01:02:23 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:07.504 01:02:23 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
[... paths/export.sh@2-@6 elided: three PATH assignments that each prepend /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin to the already-duplicated PATH, an export PATH, and an echo of the resulting string ...]
00:09:07.504 01:02:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:09:07.504 01:02:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:07.504 01:02:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:07.504 01:02:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:07.504 01:02:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:07.504 01:02:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:07.504 01:02:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:07.504 01:02:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:07.504 01:02:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:07.504 01:02:23 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:09:07.504 01:02:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:07.504 01:02:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:07.504 01:02:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:07.504 01:02:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:07.504 01:02:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:07.504 01:02:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:07.504 01:02:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:09:07.504 01:02:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:07.504 01:02:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:07.504 01:02:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:07.504 01:02:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:09:07.504 01:02:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:09.407 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:09.407 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:09:09.407 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:09.407 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:09.407 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:09.407 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:09.407 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:09.407 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:09:09.407 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:09.407 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:09:09.407 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:09:09.407 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:09:09.407 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:09:09.407 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:09:09.407 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:09:09.407 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:09.407 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:09.407 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:09.407 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:09.407 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:09.407 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:09.407 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:09.408 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:09.408 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:09.408 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:09.408 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:09.408 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:09.408 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:09.408 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:09:09.408 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:09.408 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:09.408 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:09.408 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:09.408 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:09.408 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:09.408 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:09.408 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:09.408 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:09.408 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:09.408 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:09.408 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:09.408 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:09.408 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:09.408 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:09.408 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:09.408 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:09.408 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:09.408 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:09.408 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:09.408 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:09.408 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:09.408 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:09.408 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:09.408 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:09.408 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:09.408 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:09.408 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:09.408 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:09.408 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:09.408 Found net devices under 0000:09:00.0: cvl_0_0 00:09:09.408 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:09.408 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:09.408 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:09.408 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:09.408 01:02:25 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:09.408 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:09.408 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:09.408 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:09.408 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:09.408 Found net devices under 0000:09:00.1: cvl_0_1 00:09:09.408 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:09.408 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:09.408 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:09:09.408 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:09.408 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:09.408 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:09.408 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:09.408 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:09.408 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:09.408 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:09.408 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:09.408 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:09.408 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:09.408 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:09.408 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:09.408 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:09.408 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:09.667 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:09.667 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:09.667 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:09.667 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:09.667 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:09.667 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:09.667 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:09.667 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:09.667 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:09.667 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:09.667 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:09:09.667 00:09:09.667 --- 10.0.0.2 ping statistics --- 00:09:09.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.667 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:09:09.667 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:09.667 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:09.667 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:09:09.667 00:09:09.667 --- 10.0.0.1 ping statistics --- 00:09:09.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.667 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:09:09.667 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:09.667 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:09:09.667 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:09.667 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:09.667 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:09.667 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:09.667 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:09.667 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:09.667 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:09.667 01:02:25 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:09:09.667 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:09.667 01:02:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:09.667 01:02:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:09.667 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=4082857 00:09:09.667 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:09.667 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 4082857 00:09:09.667 01:02:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 4082857 ']' 00:09:09.667 01:02:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:09.667 01:02:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:09.667 01:02:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:09.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:09.667 01:02:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:09.667 01:02:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:09.667 [2024-07-16 01:02:25.603655] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 
00:09:09.667 [2024-07-16 01:02:25.603750] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:09.667 EAL: No free 2048 kB hugepages reported on node 1 00:09:09.925 [2024-07-16 01:02:25.666873] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.925 [2024-07-16 01:02:25.773864] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:09.925 [2024-07-16 01:02:25.773916] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:09.925 [2024-07-16 01:02:25.773938] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:09.925 [2024-07-16 01:02:25.773949] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:09.925 [2024-07-16 01:02:25.773966] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:09.925 [2024-07-16 01:02:25.774011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:09.925 01:02:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:09.925 01:02:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:09:09.925 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:09.925 01:02:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:09.925 01:02:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:09.925 01:02:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:09.925 01:02:25 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:09.925 01:02:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.925 01:02:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:09.925 [2024-07-16 01:02:25.911589] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:09.925 01:02:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.925 01:02:25 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:09.925 01:02:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.925 01:02:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:10.184 01:02:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.184 01:02:25 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:10.184 01:02:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.184 01:02:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:10.184 [2024-07-16 01:02:25.927771] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:10.184 01:02:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.184 01:02:25 
nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:10.184 01:02:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.184 01:02:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:10.184 NULL1 00:09:10.184 01:02:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.184 01:02:25 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:09:10.184 01:02:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.184 01:02:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:10.184 01:02:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.184 01:02:25 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:10.184 01:02:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.184 01:02:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:10.184 01:02:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.184 01:02:25 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:09:10.184 [2024-07-16 01:02:25.972049] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:09:10.184 [2024-07-16 01:02:25.972090] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4082966 ] 00:09:10.184 EAL: No free 2048 kB hugepages reported on node 1 00:09:10.748 Attached to nqn.2016-06.io.spdk:cnode1 00:09:10.748 Namespace ID: 1 size: 1GB 00:09:10.748 fused_ordering(0) 00:09:10.748 fused_ordering(1) 00:09:10.748 fused_ordering(2) 00:09:10.748 fused_ordering(3) 00:09:10.748 fused_ordering(4) 00:09:10.748 fused_ordering(5) 00:09:10.748 fused_ordering(6) 00:09:10.748 fused_ordering(7) 00:09:10.748 fused_ordering(8) 00:09:10.748 fused_ordering(9) 00:09:10.748 fused_ordering(10) 00:09:10.748 fused_ordering(11) 00:09:10.748 fused_ordering(12) 00:09:10.748 fused_ordering(13) 00:09:10.748 fused_ordering(14) 00:09:10.748 fused_ordering(15) 00:09:10.748 fused_ordering(16) 00:09:10.748 fused_ordering(17) 00:09:10.748 fused_ordering(18) 00:09:10.748 fused_ordering(19) 00:09:10.748 fused_ordering(20) 00:09:10.748 fused_ordering(21) 00:09:10.748 fused_ordering(22) 00:09:10.748 fused_ordering(23) 00:09:10.748 fused_ordering(24) 00:09:10.748 fused_ordering(25) 00:09:10.748 fused_ordering(26) 00:09:10.748 fused_ordering(27) 00:09:10.748 fused_ordering(28) 00:09:10.748 fused_ordering(29) 00:09:10.748 fused_ordering(30) 00:09:10.748 fused_ordering(31) 00:09:10.748 fused_ordering(32) 00:09:10.748 fused_ordering(33) 00:09:10.748 fused_ordering(34) 00:09:10.748 fused_ordering(35) 00:09:10.748 fused_ordering(36) 00:09:10.748 fused_ordering(37) 00:09:10.748 fused_ordering(38) 00:09:10.748 fused_ordering(39) 00:09:10.748 fused_ordering(40) 00:09:10.748 fused_ordering(41) 00:09:10.748 fused_ordering(42) 00:09:10.748 fused_ordering(43) 00:09:10.748 
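The handful of rpc_cmd calls above is the entire target bring-up for this test: TCP transport, subsystem cnode1, a 10.0.0.2:4420 listener, a null bdev, and the namespace the harness attaches to. Collected in one place, the same target could be stood up by hand with scripts/rpc.py against the app's RPC socket (a sketch with values copied verbatim from the log; flag spellings follow the rpc_cmd lines above):

    rpc.py nvmf_create_transport -t tcp -o -u 8192       # flags as logged (-u: IO unit size)
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10                   # allow any host, serial, max 10 namespaces
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420                       # the listener the notice above reports
    rpc.py bdev_null_create NULL1 1000 512               # 1000 MB null bdev, 512-byte blocks
    rpc.py bdev_wait_for_examine
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1   # attaches as Namespace ID 1, size 1GB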
fused_ordering(44) 00:09:10.748
[... fused_ordering(45) through fused_ordering(795) elided: the counter climbs strictly by one per entry with no gaps, while the timestamp steps from 00:09:10.748 to 00:09:11.006 near entry 205, to 00:09:11.571 near entry 410, and to 00:09:12.137 near entry 615 ...]
00:09:12.138 fused_ordering(796) 00:09:12.138 fused_ordering(797)
fused_ordering(798) 00:09:12.138 fused_ordering(799) 00:09:12.138 fused_ordering(800) 00:09:12.138 fused_ordering(801) 00:09:12.138 fused_ordering(802) 00:09:12.138 fused_ordering(803) 00:09:12.138 fused_ordering(804) 00:09:12.138 fused_ordering(805) 00:09:12.138 fused_ordering(806) 00:09:12.138 fused_ordering(807) 00:09:12.138 fused_ordering(808) 00:09:12.138 fused_ordering(809) 00:09:12.138 fused_ordering(810) 00:09:12.138 fused_ordering(811) 00:09:12.138 fused_ordering(812) 00:09:12.138 fused_ordering(813) 00:09:12.138 fused_ordering(814) 00:09:12.138 fused_ordering(815) 00:09:12.138 fused_ordering(816) 00:09:12.138 fused_ordering(817) 00:09:12.138 fused_ordering(818) 00:09:12.138 fused_ordering(819) 00:09:12.138 fused_ordering(820) 00:09:12.704 fused_ordering(821) 00:09:12.704 fused_ordering(822) 00:09:12.704 fused_ordering(823) 00:09:12.704 fused_ordering(824) 00:09:12.704 fused_ordering(825) 00:09:12.704 fused_ordering(826) 00:09:12.704 fused_ordering(827) 00:09:12.704 fused_ordering(828) 00:09:12.704 fused_ordering(829) 00:09:12.704 fused_ordering(830) 00:09:12.704 fused_ordering(831) 00:09:12.704 fused_ordering(832) 00:09:12.704 fused_ordering(833) 00:09:12.704 fused_ordering(834) 00:09:12.704 fused_ordering(835) 00:09:12.704 fused_ordering(836) 00:09:12.704 fused_ordering(837) 00:09:12.704 fused_ordering(838) 00:09:12.704 fused_ordering(839) 00:09:12.704 fused_ordering(840) 00:09:12.704 fused_ordering(841) 00:09:12.704 fused_ordering(842) 00:09:12.704 fused_ordering(843) 00:09:12.704 fused_ordering(844) 00:09:12.704 fused_ordering(845) 00:09:12.704 fused_ordering(846) 00:09:12.704 fused_ordering(847) 00:09:12.704 fused_ordering(848) 00:09:12.704 fused_ordering(849) 00:09:12.704 fused_ordering(850) 00:09:12.704 fused_ordering(851) 00:09:12.704 fused_ordering(852) 00:09:12.704 fused_ordering(853) 00:09:12.704 fused_ordering(854) 00:09:12.704 fused_ordering(855) 00:09:12.704 fused_ordering(856) 00:09:12.704 fused_ordering(857) 00:09:12.704 fused_ordering(858) 00:09:12.704 fused_ordering(859) 00:09:12.704 fused_ordering(860) 00:09:12.704 fused_ordering(861) 00:09:12.704 fused_ordering(862) 00:09:12.704 fused_ordering(863) 00:09:12.704 fused_ordering(864) 00:09:12.704 fused_ordering(865) 00:09:12.704 fused_ordering(866) 00:09:12.704 fused_ordering(867) 00:09:12.704 fused_ordering(868) 00:09:12.704 fused_ordering(869) 00:09:12.704 fused_ordering(870) 00:09:12.704 fused_ordering(871) 00:09:12.704 fused_ordering(872) 00:09:12.704 fused_ordering(873) 00:09:12.704 fused_ordering(874) 00:09:12.704 fused_ordering(875) 00:09:12.704 fused_ordering(876) 00:09:12.704 fused_ordering(877) 00:09:12.704 fused_ordering(878) 00:09:12.704 fused_ordering(879) 00:09:12.704 fused_ordering(880) 00:09:12.704 fused_ordering(881) 00:09:12.704 fused_ordering(882) 00:09:12.704 fused_ordering(883) 00:09:12.704 fused_ordering(884) 00:09:12.704 fused_ordering(885) 00:09:12.704 fused_ordering(886) 00:09:12.704 fused_ordering(887) 00:09:12.704 fused_ordering(888) 00:09:12.704 fused_ordering(889) 00:09:12.704 fused_ordering(890) 00:09:12.704 fused_ordering(891) 00:09:12.704 fused_ordering(892) 00:09:12.704 fused_ordering(893) 00:09:12.704 fused_ordering(894) 00:09:12.704 fused_ordering(895) 00:09:12.704 fused_ordering(896) 00:09:12.704 fused_ordering(897) 00:09:12.704 fused_ordering(898) 00:09:12.704 fused_ordering(899) 00:09:12.704 fused_ordering(900) 00:09:12.704 fused_ordering(901) 00:09:12.704 fused_ordering(902) 00:09:12.704 fused_ordering(903) 00:09:12.704 fused_ordering(904) 00:09:12.704 fused_ordering(905) 
00:09:12.704 fused_ordering(906) 00:09:12.704 fused_ordering(907) 00:09:12.704 fused_ordering(908) 00:09:12.704 fused_ordering(909) 00:09:12.704 fused_ordering(910) 00:09:12.704 fused_ordering(911) 00:09:12.704 fused_ordering(912) 00:09:12.704 fused_ordering(913) 00:09:12.704 fused_ordering(914) 00:09:12.704 fused_ordering(915) 00:09:12.704 fused_ordering(916) 00:09:12.704 fused_ordering(917) 00:09:12.704 fused_ordering(918) 00:09:12.704 fused_ordering(919) 00:09:12.704 fused_ordering(920) 00:09:12.704 fused_ordering(921) 00:09:12.704 fused_ordering(922) 00:09:12.704 fused_ordering(923) 00:09:12.704 fused_ordering(924) 00:09:12.704 fused_ordering(925) 00:09:12.704 fused_ordering(926) 00:09:12.704 fused_ordering(927) 00:09:12.704 fused_ordering(928) 00:09:12.704 fused_ordering(929) 00:09:12.704 fused_ordering(930) 00:09:12.704 fused_ordering(931) 00:09:12.704 fused_ordering(932) 00:09:12.704 fused_ordering(933) 00:09:12.704 fused_ordering(934) 00:09:12.704 fused_ordering(935) 00:09:12.704 fused_ordering(936) 00:09:12.704 fused_ordering(937) 00:09:12.704 fused_ordering(938) 00:09:12.704 fused_ordering(939) 00:09:12.704 fused_ordering(940) 00:09:12.704 fused_ordering(941) 00:09:12.704 fused_ordering(942) 00:09:12.704 fused_ordering(943) 00:09:12.704 fused_ordering(944) 00:09:12.704 fused_ordering(945) 00:09:12.704 fused_ordering(946) 00:09:12.704 fused_ordering(947) 00:09:12.704 fused_ordering(948) 00:09:12.704 fused_ordering(949) 00:09:12.704 fused_ordering(950) 00:09:12.704 fused_ordering(951) 00:09:12.704 fused_ordering(952) 00:09:12.704 fused_ordering(953) 00:09:12.704 fused_ordering(954) 00:09:12.704 fused_ordering(955) 00:09:12.704 fused_ordering(956) 00:09:12.704 fused_ordering(957) 00:09:12.704 fused_ordering(958) 00:09:12.704 fused_ordering(959) 00:09:12.704 fused_ordering(960) 00:09:12.704 fused_ordering(961) 00:09:12.704 fused_ordering(962) 00:09:12.704 fused_ordering(963) 00:09:12.704 fused_ordering(964) 00:09:12.704 fused_ordering(965) 00:09:12.704 fused_ordering(966) 00:09:12.704 fused_ordering(967) 00:09:12.704 fused_ordering(968) 00:09:12.704 fused_ordering(969) 00:09:12.704 fused_ordering(970) 00:09:12.704 fused_ordering(971) 00:09:12.704 fused_ordering(972) 00:09:12.704 fused_ordering(973) 00:09:12.704 fused_ordering(974) 00:09:12.704 fused_ordering(975) 00:09:12.704 fused_ordering(976) 00:09:12.704 fused_ordering(977) 00:09:12.704 fused_ordering(978) 00:09:12.704 fused_ordering(979) 00:09:12.704 fused_ordering(980) 00:09:12.704 fused_ordering(981) 00:09:12.704 fused_ordering(982) 00:09:12.704 fused_ordering(983) 00:09:12.704 fused_ordering(984) 00:09:12.705 fused_ordering(985) 00:09:12.705 fused_ordering(986) 00:09:12.705 fused_ordering(987) 00:09:12.705 fused_ordering(988) 00:09:12.705 fused_ordering(989) 00:09:12.705 fused_ordering(990) 00:09:12.705 fused_ordering(991) 00:09:12.705 fused_ordering(992) 00:09:12.705 fused_ordering(993) 00:09:12.705 fused_ordering(994) 00:09:12.705 fused_ordering(995) 00:09:12.705 fused_ordering(996) 00:09:12.705 fused_ordering(997) 00:09:12.705 fused_ordering(998) 00:09:12.705 fused_ordering(999) 00:09:12.705 fused_ordering(1000) 00:09:12.705 fused_ordering(1001) 00:09:12.705 fused_ordering(1002) 00:09:12.705 fused_ordering(1003) 00:09:12.705 fused_ordering(1004) 00:09:12.705 fused_ordering(1005) 00:09:12.705 fused_ordering(1006) 00:09:12.705 fused_ordering(1007) 00:09:12.705 fused_ordering(1008) 00:09:12.705 fused_ordering(1009) 00:09:12.705 fused_ordering(1010) 00:09:12.705 fused_ordering(1011) 00:09:12.705 fused_ordering(1012) 
00:09:12.705 fused_ordering(1013) 00:09:12.705 fused_ordering(1014) 00:09:12.705 fused_ordering(1015) 00:09:12.705 fused_ordering(1016) 00:09:12.705 fused_ordering(1017) 00:09:12.705 fused_ordering(1018) 00:09:12.705 fused_ordering(1019) 00:09:12.705 fused_ordering(1020) 00:09:12.705 fused_ordering(1021) 00:09:12.705 fused_ordering(1022) 00:09:12.705 fused_ordering(1023) 00:09:12.705 01:02:28 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:09:12.705 01:02:28 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:09:12.705 01:02:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:12.705 01:02:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:09:12.705 01:02:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:12.705 01:02:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:09:12.705 01:02:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:12.705 01:02:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:12.705 rmmod nvme_tcp 00:09:12.705 rmmod nvme_fabrics 00:09:12.705 rmmod nvme_keyring 00:09:12.705 01:02:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:12.705 01:02:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:09:12.705 01:02:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:09:12.705 01:02:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 4082857 ']' 00:09:12.705 01:02:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 4082857 00:09:12.705 01:02:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 4082857 ']' 00:09:12.705 01:02:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 4082857 00:09:12.705 01:02:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:09:12.705 01:02:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:12.705 01:02:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4082857 00:09:12.705 01:02:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:12.705 01:02:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:12.705 01:02:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4082857' 00:09:12.705 killing process with pid 4082857 00:09:12.705 01:02:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 4082857 00:09:12.705 01:02:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 4082857 00:09:12.984 01:02:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:12.984 01:02:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:12.984 01:02:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:12.984 01:02:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:12.984 01:02:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:12.984 01:02:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:12.984 01:02:28 nvmf_tcp.nvmf_fused_ordering -- 
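
The nvmftestfini sequence above is the standard harness teardown: sync, retry `modprobe -v -r nvme-tcp` until the initiator modules unload (the bare `rmmod nvme_tcp` / `rmmod nvme_fabrics` / `rmmod nvme_keyring` lines are modprobe's verbose output), kill the nvmf_tgt process by pid, and remove the SPDK network namespace before flushing the initiator address. A minimal standalone sketch of the same steps, with the pid, namespace, and interface names taken from this particular run:

  # Hedged sketch of the nvmftestfini-style teardown logged above; pid,
  # netns name and interface names are the ones this run happened to use.
  NVMF_PID=4082857
  sync                                      # settle outstanding I/O first
  for i in {1..20}; do
      modprobe -v -r nvme-tcp && break      # may need retries while qpairs drain
      sleep 1
  done
  modprobe -v -r nvme-fabrics
  kill "$NVMF_PID" 2>/dev/null || true      # stop the nvmf_tgt reactor process
  ip netns del cvl_0_0_ns_spdk 2>/dev/null  # drop the target-side namespace
  ip -4 addr flush cvl_0_1                  # clear the initiator-side address
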
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:12.984 01:02:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:15.527 01:02:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:15.527 00:09:15.527 real 0m7.728s 00:09:15.527 user 0m5.256s 00:09:15.527 sys 0m3.357s 00:09:15.527 01:02:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:15.527 01:02:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:15.527 ************************************ 00:09:15.527 END TEST nvmf_fused_ordering 00:09:15.527 ************************************ 00:09:15.527 01:02:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:15.527 01:02:30 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:15.527 01:02:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:15.527 01:02:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:15.527 01:02:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:15.527 ************************************ 00:09:15.527 START TEST nvmf_delete_subsystem 00:09:15.527 ************************************ 00:09:15.527 01:02:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:15.527 * Looking for test storage... 00:09:15.527 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:15.527 01:02:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:15.527 01:02:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:09:15.527 01:02:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:15.527 01:02:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:15.527 01:02:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:15.527 01:02:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:15.527 01:02:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:15.527 01:02:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:15.527 01:02:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:15.527 01:02:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:15.527 01:02:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:15.527 01:02:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:15.527 01:02:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:15.527 01:02:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:15.527 01:02:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:15.528 01:02:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:15.528 01:02:31 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:15.528 01:02:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:15.528 01:02:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:15.528 01:02:31 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:15.528 01:02:31 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:15.528 01:02:31 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:15.528 [paths/export.sh@2-@6 condensed: each step re-prepends /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin to the already-long PATH, then exports and echoes the result] 01:02:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:09:15.528 01:02:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:15.528 01:02:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args
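
Before any target work starts, nvmf/common.sh (sourced above) pins the well-known test constants and generates a fresh host NQN. A condensed sketch of that environment block, with every value copied from the trace; only the hostid derivation is an assumption about how the harness splits the NQN:

  # Test constants as logged by nvmf/common.sh for this run.
  NVMF_PORT=4420
  NVMF_SECOND_PORT=4421
  NVMF_THIRD_PORT=4422
  NVMF_IP_PREFIX=192.168.100
  NVMF_IP_LEAST_ADDR=8
  NVMF_TCP_IP_ADDRESS=127.0.0.1
  NVMF_SERIAL=SPDKISFASTANDAWESOME
  NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:29f67375-...
  NVME_HOSTID=${NVME_HOSTNQN##*:}    # assumed: uuid suffix doubles as the host id
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
  NVME_CONNECT='nvme connect'
  NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
  NET_TYPE=phy                       # physical NICs, not virtual/veth
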
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:15.528 01:02:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:15.528 01:02:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:15.528 01:02:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:15.528 01:02:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:15.528 01:02:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:15.528 01:02:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:09:15.528 01:02:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:15.528 01:02:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:15.528 01:02:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:15.528 01:02:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:15.528 01:02:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:15.528 01:02:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:15.528 01:02:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:15.528 01:02:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:15.528 01:02:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:15.528 01:02:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:15.528 01:02:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:09:15.528 01:02:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:17.430 01:02:33 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:17.430 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:17.430 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:17.430 01:02:33 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:17.430 Found net devices under 0000:09:00.0: cvl_0_0 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:17.430 Found net devices under 0000:09:00.1: cvl_0_1 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:17.430 01:02:33 
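
The device walk above is how the harness decides which NICs can carry the test: it matches the PCI bus against known Intel E810 (0x1592, 0x159b), X722 (0x37d2) and Mellanox device IDs, keeps the E810 pair it finds at 0000:09:00.0/0000:09:00.1, and resolves each function to its kernel interface through the /sys/bus/pci/devices/$pci/net/* glob visible in the trace, yielding cvl_0_0 and cvl_0_1. A reduced sketch of that resolution step (the lspci-based ID matching is an assumption; the sysfs glob is the harness's own):

  # Resolve NVMe-oF-capable NICs (Intel E810, device id 0x159b) to net devices.
  intel=0x8086
  for pci in $(lspci -Dnmm | awk '/159b/ {print $1}'); do
      echo "Found $pci ($intel - 0x159b)"
      # Same sysfs glob the harness expands to find the interface name:
      for pci_net_dev in /sys/bus/pci/devices/"$pci"/net/*; do
          [[ -e $pci_net_dev ]] || continue
          echo "Found net devices under $pci: ${pci_net_dev##*/}"
      done
  done
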
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:17.430 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:17.430 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:09:17.430 00:09:17.430 --- 10.0.0.2 ping statistics --- 00:09:17.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:17.430 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:17.430 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:17.430 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:09:17.430 00:09:17.430 --- 10.0.0.1 ping statistics --- 00:09:17.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:17.430 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=4085204 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 4085204 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 4085204 ']' 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:17.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:17.430 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:17.430 [2024-07-16 01:02:33.394232] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:09:17.430 [2024-07-16 01:02:33.394335] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:17.688 EAL: No free 2048 kB hugepages reported on node 1 00:09:17.688 [2024-07-16 01:02:33.460821] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:17.688 [2024-07-16 01:02:33.570372] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
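
The nvmf_tcp_init steps above build a point-to-point topology out of the two E810 ports: the target port cvl_0_0 is moved into a fresh network namespace and addressed 10.0.0.2/24, the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1/24, TCP port 4420 is opened, and both directions are ping-verified; nvmf_tgt is then launched inside the namespace with reactor mask 0x3 (cores 0 and 1), as the EAL/reactor lines that follow show. The same topology as a standalone sketch, commands mirroring the trace:

  # Target NIC in a private netns, initiator NIC in the root ns (names from this log).
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP
  ping -c 1 10.0.0.2                                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator
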
00:09:17.688 [2024-07-16 01:02:33.570433] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:17.688 [2024-07-16 01:02:33.570450] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:17.688 [2024-07-16 01:02:33.570461] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:17.688 [2024-07-16 01:02:33.570470] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:17.688 [2024-07-16 01:02:33.570554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:17.688 [2024-07-16 01:02:33.570560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.944 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:17.944 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:09:17.944 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:17.944 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:17.944 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:17.944 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:17.944 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:17.944 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.944 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:17.944 [2024-07-16 01:02:33.712121] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:17.944 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.944 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:17.944 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.944 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:17.944 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.944 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:17.944 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.944 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:17.944 [2024-07-16 01:02:33.728345] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:17.944 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.944 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:17.944 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.944 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:17.944 NULL1 00:09:17.944 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:09:17.944 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:17.944 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.944 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:17.944 Delay0 00:09:17.944 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.944 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:17.944 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.944 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:17.944 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.944 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=4085234 00:09:17.944 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:09:17.944 01:02:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:17.944 EAL: No free 2048 kB hugepages reported on node 1 00:09:17.944 [2024-07-16 01:02:33.803050] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
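
With the target up, the whole fixture is provisioned over SPDK's JSON-RPC and then loaded from the initiator side: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 (capped at 10 namespaces), a listener on 10.0.0.2:4420, a 1000 MiB null bdev wrapped in a delay bdev (all four latency knobs at 1000000 us, i.e. one second), the namespace attach, and a 5-second spdk_nvme_perf run. A sketch of the equivalent direct rpc.py calls (the rpc.py path is an assumption; every flag is copied from the trace above):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py  # assumed location
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_null_create NULL1 1000 512             # 1000 MiB bdev, 512 B blocks
  $RPC bdev_delay_create -b NULL1 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000  # avg/p99 read+write latency: 1 s
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  # Initiator load: 70/30 randrw, QD 128, 512 B I/O for 5 s on cores 2-3 (0xC),
  # deliberately disjoint from the target's reactor mask 0x3 (cores 0-1).
  ./build/bin/spdk_nvme_perf -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4

Because the script sleeps only two seconds before issuing nvmf_delete_subsystem, the perf run is still mid-flight when the subsystem goes away. The flood of completions with (sct=0, sc=8) that follows corresponds, per the NVMe base spec, to the generic status "Command Aborted due to SQ Deletion": the expected outcome of deleting a subsystem under load, not a test failure.
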
00:09:19.837 01:02:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:19.837 01:02:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:19.837 01:02:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:20.093 Read completed with error (sct=0, sc=8) 00:09:20.093 starting I/O failed: -6 00:09:20.093 Read completed with error (sct=0, sc=8) 00:09:20.093 Read completed with error (sct=0, sc=8) 00:09:20.093 Read completed with error (sct=0, sc=8) 00:09:20.093 Read completed with error (sct=0, sc=8) 00:09:20.093 starting I/O failed: -6 00:09:20.093 Write completed with error (sct=0, sc=8) 00:09:20.093 Write completed with error (sct=0, sc=8) 00:09:20.093 Read completed with error (sct=0, sc=8) 00:09:20.093 Read completed with error (sct=0, sc=8) 00:09:20.093 starting I/O failed: -6 00:09:20.093 Read completed with error (sct=0, sc=8) 00:09:20.093 Write completed with error (sct=0, sc=8) 00:09:20.093 Read completed with error (sct=0, sc=8) 00:09:20.093 Read completed with error (sct=0, sc=8) 00:09:20.093 starting I/O failed: -6 00:09:20.093 Read completed with error (sct=0, sc=8) 00:09:20.093 Write completed with error (sct=0, sc=8) 00:09:20.093 Read completed with error (sct=0, sc=8) 00:09:20.093 Write completed with error (sct=0, sc=8) 00:09:20.093 starting I/O failed: -6 00:09:20.093 Read completed with error (sct=0, sc=8) 00:09:20.093 Read completed with error (sct=0, sc=8) 00:09:20.093 Write completed with error (sct=0, sc=8) 00:09:20.093 Read completed with error (sct=0, sc=8) 00:09:20.093 starting I/O failed: -6 00:09:20.093 Read completed with error (sct=0, sc=8) 00:09:20.093 Read completed with error (sct=0, sc=8) 00:09:20.093 Write completed with error (sct=0, sc=8) 00:09:20.093 Read completed with error (sct=0, sc=8) 00:09:20.093 starting I/O failed: -6 00:09:20.093 Read completed with error (sct=0, sc=8) 00:09:20.093 Read completed with error (sct=0, sc=8) 00:09:20.093 Read completed with error (sct=0, sc=8) 00:09:20.093 Write completed with error (sct=0, sc=8) 00:09:20.093 starting I/O failed: -6 00:09:20.093 Read completed with error (sct=0, sc=8) 00:09:20.093 Read completed with error (sct=0, sc=8) 00:09:20.093 Read completed with error (sct=0, sc=8) 00:09:20.093 Read completed with error (sct=0, sc=8) 00:09:20.093 starting I/O failed: -6 00:09:20.093 Read completed with error (sct=0, sc=8) 00:09:20.093 Read completed with error (sct=0, sc=8) 00:09:20.093 Read completed with error (sct=0, sc=8) 00:09:20.093 Read completed with error (sct=0, sc=8) 00:09:20.093 starting I/O failed: -6 00:09:20.093 Write completed with error (sct=0, sc=8) 00:09:20.093 Read completed with error (sct=0, sc=8) 00:09:20.093 Write completed with error (sct=0, sc=8) 00:09:20.093 Read completed with error (sct=0, sc=8) 00:09:20.093 starting I/O failed: -6 00:09:20.093 Write completed with error (sct=0, sc=8) 00:09:20.093 [2024-07-16 01:02:35.933460] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23caaf0 is same with the state(5) to be set 00:09:20.093 Read completed with error (sct=0, sc=8) 00:09:20.093 Write completed with error (sct=0, sc=8) 00:09:20.093 Read completed with error (sct=0, sc=8) 00:09:20.093 Write completed with error (sct=0, sc=8) 00:09:20.093 starting I/O failed: -6 00:09:20.093 Read completed with error (sct=0, sc=8) 00:09:20.093 Read completed with error (sct=0, sc=8) 00:09:20.093 Write completed with 
00:09:20.093 [elided: long runs of "Read completed with error (sct=0, sc=8)" / "Write completed with error (sct=0, sc=8)" completions from spdk_nvme_perf, punctuated by "starting I/O failed: -6", while the target subsystem is deleted under active I/O]
00:09:20.093 [2024-07-16 01:02:35.934192] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd32000d370 is same with the state(5) to be set
00:09:21.024 [2024-07-16 01:02:36.899357] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23cba70 is same with the state(5) to be set
00:09:21.024 [2024-07-16 01:02:36.934982] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd32000d020 is same with the state(5) to be set
00:09:21.024 [2024-07-16 01:02:36.935520] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd32000d6c0 is same with the state(5) to be set
00:09:21.024 [2024-07-16 01:02:36.935716] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23cae40 is same with the state(5) to be set
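Reading the failure signature: sct=0 with sc=8 appears to be, per the NVMe specification's generic command status table, "Command Aborted due to SQ Deletion" — the expected way for in-flight commands to complete while nvmf_delete_subsystem tears the subsystem down under load. One way to gauge the abort volume when triaging a saved console log (the log file name below is illustrative):

  # count the aborted read/write completions in a captured console log
  grep -c 'completed with error (sct=0, sc=8)' console.log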
00:09:21.024 [elided: remaining "Read/Write completed with error (sct=0, sc=8)" completions from the first spdk_nvme_perf run]
00:09:21.024 [2024-07-16 01:02:36.936851] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ca390 is same with the state(5) to be set
00:09:21.024 Initializing NVMe Controllers
00:09:21.024 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:09:21.024 Controller IO queue size 128, less than required.
00:09:21.024 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:21.024 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:09:21.024 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:09:21.024 Initialization complete. Launching workers.
00:09:21.024 ========================================================
00:09:21.024                                                                            Latency(us)
00:09:21.024 Device Information                                                       :    IOPS   MiB/s    Average        min        max
00:09:21.024 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:  163.30    0.08  929782.59     642.58 2003471.46
00:09:21.024 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:  160.32    0.08  984084.86     324.06 2003597.07
00:09:21.024 ========================================================
00:09:21.024 Total                                                                    :  323.62    0.16  956683.87     324.06 2003597.07
00:09:21.024
00:09:21.024 [2024-07-16 01:02:36.937573] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23cba70 (9): Bad file descriptor
00:09:21.024 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:09:21.024 01:02:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:21.024 01:02:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:09:21.024 01:02:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 4085234
00:09:21.024 01:02:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:09:21.589 01:02:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:09:21.589 01:02:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 4085234
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (4085234) - No such process
00:09:21.589 01:02:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 4085234
00:09:21.589 01:02:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0
00:09:21.589 01:02:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 4085234
00:09:21.589 01:02:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait
00:09:21.589 01:02:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:09:21.589 01:02:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait
00:09:21.589 01:02:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:09:21.589 01:02:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 4085234
00:09:21.589 01:02:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1
00:09:21.589 01:02:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:09:21.589 01:02:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:09:21.589 01:02:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:09:21.589 01:02:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:09:21.589 01:02:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:21.589 01:02:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:09:21.589 01:02:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:21.589 01:02:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:09:21.589 01:02:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:21.589 01:02:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:09:21.589 [2024-07-16 01:02:37.461595] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:09:21.589 01:02:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:21.589 01:02:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:21.589 01:02:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:21.589 01:02:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:09:21.589 01:02:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:21.589 01:02:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=4085760
00:09:21.589 01:02:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0
00:09:21.589 01:02:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
00:09:21.589 01:02:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4085760
00:09:21.589 01:02:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:09:21.589 EAL: No free 2048 kB hugepages reported on node 1
00:09:21.589 [2024-07-16 01:02:37.524812] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
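The trace above recreates the subsystem and starts a second spdk_nvme_perf run, which delete_subsystem.sh then polls twice a second. A condensed sketch of that flow, reconstructed from the xtrace (rpc_cmd wraps scripts/rpc.py; the loop structure is an approximation of the script, not a verbatim copy):

  # rebuild the subsystem: listener on 10.0.0.2:4420 plus the Delay0 namespace
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

  # 3-second 70/30 randrw load at queue depth 128, run in the background
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
      -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!

  # poll until perf exits; kill -0 only probes whether the pid still exists
  delay=0
  while kill -0 "$perf_pid" 2> /dev/null; do
      (( delay++ > 20 )) && exit 1    # give up ~10 s past the expected runtime
      sleep 0.5
  done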
00:09:22.151 [elided: delete_subsystem.sh@60/@57/@58 poll loop — (( delay++ > 20 )), kill -0 4085760, sleep 0.5 — iterated at 00:09:22.151, 00:09:22.714, 00:09:23.278, 00:09:23.534, 00:09:24.098 and 00:09:24.661 while the 3-second perf run completes]
00:09:24.662 Initializing NVMe Controllers
00:09:24.662 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:09:24.662 Controller IO queue size 128, less than required.
00:09:24.662 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:24.662 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:09:24.662 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:09:24.662 Initialization complete. Launching workers.
00:09:24.662 ========================================================
00:09:24.662                                                                            Latency(us)
00:09:24.662 Device Information                                                       :    IOPS   MiB/s     Average         min         max
00:09:24.662 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:  128.00    0.06  1003583.00  1000171.13  1011312.32
00:09:24.662 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:  128.00    0.06  1006281.50  1000171.55  1043264.80
00:09:24.662 ========================================================
00:09:24.662 Total                                                                    :  256.00    0.12  1004932.25  1000171.13  1043264.80
00:09:24.662
00:09:25.227 01:02:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:09:25.227 01:02:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4085760
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (4085760) - No such process
00:09:25.227 01:02:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 4085760
00:09:25.227 01:02:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:09:25.227 01:02:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:09:25.227 01:02:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup
00:09:25.227 01:02:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync
00:09:25.227 01:02:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:09:25.227 01:02:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e
00:09:25.227 01:02:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20}
00:09:25.227 01:02:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:09:25.227 rmmod nvme_tcp
00:09:25.227 rmmod nvme_fabrics
00:09:25.227 rmmod nvme_keyring
00:09:25.227 01:02:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:09:25.227 01:02:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e
00:09:25.227 01:02:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0
00:09:25.227 01:02:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 4085204 ']'
00:09:25.227 01:02:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 4085204
00:09:25.228 01:02:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 4085204 ']'
00:09:25.228 01:02:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 4085204
00:09:25.228 01:02:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname
00:09:25.228 01:02:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:09:25.228 01:02:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4085204
00:09:25.228 01:02:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:09:25.228 01:02:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:09:25.228 01:02:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4085204'
00:09:25.228 killing process with pid 4085204
00:09:25.228 01:02:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 4085204
00:09:25.228 01:02:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 4085204
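nvmftestfini, whose xtrace appears above and continues below, boils down to roughly the following (the nvmfpid variable name and the netns delete are assumptions about what the killprocess and _remove_spdk_ns helpers do; in this run the target pid was 4085204):

  # unload the initiator-side NVMe/TCP modules; the rmmod lines above are modprobe -v output
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics

  # stop the nvmf_tgt reactor and reap it
  kill "$nvmfpid"
  wait "$nvmfpid"

  # tear down the test network namespace and flush the initiator-side address
  ip netns delete cvl_0_0_ns_spdk 2> /dev/null
  ip -4 addr flush cvl_0_1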
00:09:25.487 01:02:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:09:25.487 01:02:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:09:25.487 01:02:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:09:25.487 01:02:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:09:25.487 01:02:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns
00:09:25.487 01:02:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:25.487 01:02:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:09:25.487 01:02:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:27.394 01:02:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:09:27.651
00:09:27.651 real 0m12.398s
00:09:27.651 user 0m27.791s
00:09:27.651 sys 0m2.965s
00:09:27.651 01:02:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable
00:09:27.651 01:02:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:09:27.651 ************************************
00:09:27.651 END TEST nvmf_delete_subsystem
00:09:27.651 ************************************
00:09:27.651 01:02:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:09:27.651 01:02:43 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp
00:09:27.651 01:02:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:09:27.651 01:02:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:27.651 01:02:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:09:27.651 ************************************
00:09:27.651 START TEST nvmf_ns_masking
00:09:27.651 ************************************
00:09:27.651 01:02:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp
00:09:27.651 * Looking for test storage...
00:09:27.651 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:09:27.651 01:02:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:09:27.651 01:02:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s
00:09:27.651 01:02:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:09:27.651 01:02:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:09:27.651 01:02:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:09:27.651 01:02:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:09:27.651 01:02:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:09:27.651 01:02:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:09:27.651 01:02:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:09:27.651 01:02:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:09:27.651 01:02:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:09:27.651 01:02:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:09:27.651 01:02:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:09:27.651 01:02:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a
00:09:27.651 01:02:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:09:27.651 01:02:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:09:27.651 01:02:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:09:27.651 01:02:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:09:27.651 01:02:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:09:27.651 01:02:43 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:09:27.651 01:02:43 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:09:27.651 01:02:43 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:09:27.651 [elided: paths/export.sh@2-@6 xtrace — PATH repeatedly prepended with /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin, then exported and echoed]
00:09:27.651 01:02:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0
00:09:27.651 01:02:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:09:27.651 01:02:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:09:27.651 01:02:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:09:27.651 01:02:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:09:27.651 01:02:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:09:27.651 01:02:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:09:27.651 01:02:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:09:27.651 01:02:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0
00:09:27.651 01:02:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:09:27.651 01:02:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock
00:09:27.651 01:02:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5
00:09:27.651 01:02:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen
00:09:27.651 01:02:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=7e3e8ce2-4f8a-4407-95c7-65d7f38e9a26
00:09:27.651 01:02:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen
00:09:27.651 01:02:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=7cceb329-bdec-4582-ac81-e35b76e17bb0
00:09:27.651 01:02:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- #
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:09:27.651 01:02:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:09:27.651 01:02:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:09:27.651 01:02:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:09:27.651 01:02:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=addf4707-f7dc-4c42-86b5-f2940958dc59 00:09:27.651 01:02:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:09:27.651 01:02:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:27.651 01:02:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:27.651 01:02:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:27.651 01:02:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:27.651 01:02:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:27.651 01:02:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.651 01:02:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:27.651 01:02:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:27.651 01:02:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:27.651 01:02:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:27.651 01:02:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:09:27.651 01:02:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:30.184 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:30.184 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:30.184 
01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:30.184 Found net devices under 0000:09:00.0: cvl_0_0 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:30.184 Found net devices under 0000:09:00.1: cvl_0_1 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:30.184 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:30.185 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:30.185 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:30.185 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:30.185 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:30.185 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:30.185 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:30.185 01:02:45 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:30.185 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:30.185 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:30.185 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:30.185 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:30.185 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:30.185 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:30.185 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:30.185 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.148 ms 00:09:30.185 00:09:30.185 --- 10.0.0.2 ping statistics --- 00:09:30.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:30.185 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:09:30.185 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:30.185 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:30.185 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:09:30.185 00:09:30.185 --- 10.0.0.1 ping statistics --- 00:09:30.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:30.185 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:09:30.185 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:30.185 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:09:30.185 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:30.185 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:30.185 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:30.185 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:30.185 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:30.185 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:30.185 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:30.185 01:02:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:09:30.185 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:30.185 01:02:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:30.185 01:02:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:30.185 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=4088109 00:09:30.185 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:09:30.185 01:02:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 4088109 00:09:30.185 01:02:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 4088109 ']' 00:09:30.185 01:02:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.185 01:02:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:30.185 01:02:45 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:30.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:30.185 01:02:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:30.185 01:02:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:30.185 [2024-07-16 01:02:45.888576] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:09:30.185 [2024-07-16 01:02:45.888665] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:30.185 EAL: No free 2048 kB hugepages reported on node 1 00:09:30.185 [2024-07-16 01:02:45.951584] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.185 [2024-07-16 01:02:46.050977] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:30.185 [2024-07-16 01:02:46.051032] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:30.185 [2024-07-16 01:02:46.051055] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:30.185 [2024-07-16 01:02:46.051065] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:30.185 [2024-07-16 01:02:46.051074] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:30.185 [2024-07-16 01:02:46.051100] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.185 01:02:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:30.185 01:02:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:09:30.185 01:02:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:30.185 01:02:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:30.185 01:02:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:30.443 01:02:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:30.443 01:02:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:30.699 [2024-07-16 01:02:46.458694] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:30.699 01:02:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:09:30.699 01:02:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:09:30.699 01:02:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:09:30.956 Malloc1 00:09:30.956 01:02:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:09:31.247 Malloc2 00:09:31.247 01:02:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 
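For reference, the target-side bring-up just traced collapses to four RPCs (rpc.py path as used in this workspace; the nvmf_tgt they talk to was started inside the cvl_0_0_ns_spdk namespace a few lines earlier):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # TCP transport with an 8 KiB I/O unit size, as passed by the harness
  $rpc nvmf_create_transport -t tcp -o -u 8192

  # two 64 MiB RAM-backed bdevs with 512-byte blocks to serve as namespaces
  $rpc bdev_malloc_create 64 512 -b Malloc1
  $rpc bdev_malloc_create 64 512 -b Malloc2

  # subsystem open to any host (-a), with the serial waitforserial greps for
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME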
00:09:31.511 01:02:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:09:31.768 01:02:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:32.025 [2024-07-16 01:02:47.796881] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:32.025 01:02:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:09:32.025 01:02:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I addf4707-f7dc-4c42-86b5-f2940958dc59 -a 10.0.0.2 -s 4420 -i 4 00:09:32.025 01:02:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:09:32.025 01:02:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:09:32.025 01:02:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:32.026 01:02:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:32.026 01:02:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:09:34.547 01:02:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:34.547 01:02:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:34.547 01:02:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:34.547 01:02:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:34.547 01:02:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:34.547 01:02:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:09:34.547 01:02:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:34.547 01:02:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:34.547 01:02:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:34.547 01:02:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:34.547 01:02:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:09:34.547 01:02:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:34.547 01:02:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:34.547 [ 0]:0x1 00:09:34.547 01:02:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:34.547 01:02:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:34.547 01:02:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=84cf001eee17484482868514f3bda1d2 00:09:34.547 01:02:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 84cf001eee17484482868514f3bda1d2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:34.547 01:02:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 
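The "[ 0]:0x1" output above comes from ns_masking.sh's ns_is_visible helper, which per the trace reduces to two nvme-cli calls; a sketch consistent with that trace (error handling simplified):

  # a namespace counts as visible when nvme list-ns reports it and it
  # identifies with a non-zero NGUID; a masked NSID identifies as inactive
  # (all-zero NGUID), so the final test fails
  ns_is_visible() {
      nvme list-ns /dev/nvme0 | grep "$1"
      local nguid
      nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
      [[ $nguid != "00000000000000000000000000000000" ]]
  }

  ns_is_visible 0x1   # succeeds here: NSID 1 is auto-visible
  ns_is_visible 0x2   # succeeds once Malloc2 is attached as NSID 2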
00:09:34.547 01:02:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:09:34.547 01:02:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:34.547 01:02:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:34.547 [ 0]:0x1 00:09:34.547 01:02:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:34.547 01:02:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:34.547 01:02:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=84cf001eee17484482868514f3bda1d2 00:09:34.547 01:02:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 84cf001eee17484482868514f3bda1d2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:34.547 01:02:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:09:34.547 01:02:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:34.547 01:02:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:34.547 [ 1]:0x2 00:09:34.547 01:02:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:34.547 01:02:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:34.547 01:02:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f914923ffb2c465cbaa8ab8aac73bdf9 00:09:34.547 01:02:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f914923ffb2c465cbaa8ab8aac73bdf9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:34.547 01:02:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:09:34.547 01:02:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:34.547 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:34.547 01:02:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:35.113 01:02:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:09:35.113 01:02:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:09:35.113 01:02:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I addf4707-f7dc-4c42-86b5-f2940958dc59 -a 10.0.0.2 -s 4420 -i 4 00:09:35.371 01:02:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:09:35.371 01:02:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:09:35.371 01:02:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:35.371 01:02:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:09:35.371 01:02:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:09:35.371 01:02:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:09:37.900 01:02:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:37.900 01:02:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:37.900 01:02:53 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:37.900 01:02:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:37.900 01:02:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:37.900 01:02:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:09:37.900 01:02:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:37.900 01:02:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:37.900 01:02:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:37.900 01:02:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:37.900 01:02:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:09:37.900 01:02:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:37.900 01:02:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:37.900 01:02:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:37.900 01:02:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:37.900 01:02:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:37.900 01:02:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:37.900 01:02:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:37.900 01:02:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:37.900 01:02:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:37.900 01:02:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:37.900 01:02:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:37.900 01:02:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:37.900 01:02:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:37.900 01:02:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:37.900 01:02:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:37.900 01:02:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:37.900 01:02:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:37.900 01:02:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:09:37.900 01:02:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:37.900 01:02:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:37.900 [ 0]:0x2 00:09:37.900 01:02:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:37.900 01:02:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:37.900 01:02:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f914923ffb2c465cbaa8ab8aac73bdf9 00:09:37.900 01:02:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
f914923ffb2c465cbaa8ab8aac73bdf9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:37.900 01:02:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:37.900 01:02:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:09:37.900 01:02:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:37.900 01:02:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:37.900 [ 0]:0x1 00:09:37.900 01:02:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:37.900 01:02:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:37.900 01:02:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=84cf001eee17484482868514f3bda1d2 00:09:37.900 01:02:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 84cf001eee17484482868514f3bda1d2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:37.900 01:02:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:09:37.900 01:02:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:37.900 01:02:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:37.900 [ 1]:0x2 00:09:37.900 01:02:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:37.900 01:02:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:37.900 01:02:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f914923ffb2c465cbaa8ab8aac73bdf9 00:09:37.900 01:02:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f914923ffb2c465cbaa8ab8aac73bdf9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:37.900 01:02:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:38.158 01:02:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:09:38.158 01:02:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:38.158 01:02:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:38.158 01:02:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:38.158 01:02:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:38.158 01:02:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:38.158 01:02:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:38.158 01:02:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:38.158 01:02:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:38.158 01:02:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:38.158 01:02:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:38.158 01:02:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:38.158 01:02:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:09:38.158 01:02:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:38.158 01:02:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:38.158 01:02:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:38.158 01:02:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:38.158 01:02:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:38.158 01:02:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:09:38.158 01:02:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:38.158 01:02:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:38.158 [ 0]:0x2 00:09:38.158 01:02:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:38.158 01:02:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:38.158 01:02:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f914923ffb2c465cbaa8ab8aac73bdf9 00:09:38.158 01:02:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f914923ffb2c465cbaa8ab8aac73bdf9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:38.158 01:02:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:09:38.158 01:02:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:38.158 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:38.158 01:02:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:38.723 01:02:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:09:38.723 01:02:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I addf4707-f7dc-4c42-86b5-f2940958dc59 -a 10.0.0.2 -s 4420 -i 4 00:09:38.723 01:02:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:09:38.723 01:02:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:09:38.723 01:02:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:38.723 01:02:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:09:38.723 01:02:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:09:38.723 01:02:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:09:41.250 01:02:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:41.250 01:02:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:41.250 01:02:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:41.250 01:02:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:09:41.250 01:02:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:41.250 01:02:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 
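
The visibility checks traced above reduce to one small helper. The following is a condensed sketch of what target/ns_masking.sh appears to do, reconstructed from the xtrace rather than copied from the script; the controller name /dev/nvme0 and the all-zero-NGUID convention are taken from this run:

    ns_is_visible() {                        # arg: nsid, e.g. 0x1
        nvme list-ns /dev/nvme0 | grep "$1"  # prints "[ 0]:0x1" when the nsid is listed
        nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
        # the target reports an all-zero NGUID when the namespace is masked
        # from this host, so a non-zero NGUID is taken as "visible"
        [[ $nguid != "00000000000000000000000000000000" ]]
    }

Visibility is flipped per host with the two RPCs exercised above, nvmf_ns_add_host and nvmf_ns_remove_host on nqn.2016-06.io.spdk:cnode1, and the NOT wrapper treats a failed check as the expected outcome for a masked namespace.
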
00:09:41.250 01:02:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:41.250 01:02:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:41.250 01:02:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:41.250 01:02:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:41.250 01:02:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:09:41.250 01:02:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:41.250 01:02:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:41.250 [ 0]:0x1 00:09:41.250 01:02:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:41.250 01:02:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:41.250 01:02:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=84cf001eee17484482868514f3bda1d2 00:09:41.250 01:02:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 84cf001eee17484482868514f3bda1d2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:41.250 01:02:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:09:41.250 01:02:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:41.250 01:02:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:41.250 [ 1]:0x2 00:09:41.250 01:02:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:41.250 01:02:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:41.250 01:02:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f914923ffb2c465cbaa8ab8aac73bdf9 00:09:41.250 01:02:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f914923ffb2c465cbaa8ab8aac73bdf9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:41.250 01:02:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:41.250 01:02:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:09:41.250 01:02:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:41.250 01:02:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:41.250 01:02:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:41.250 01:02:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:41.250 01:02:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:41.250 01:02:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:41.250 01:02:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:41.251 01:02:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:41.251 01:02:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:41.251 01:02:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:41.251 01:02:57 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:09:41.251 01:02:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:41.251 01:02:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:41.251 01:02:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:41.251 01:02:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:41.251 01:02:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:41.251 01:02:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:41.251 01:02:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:09:41.251 01:02:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:41.251 01:02:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:41.251 [ 0]:0x2 00:09:41.251 01:02:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:41.251 01:02:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:41.251 01:02:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f914923ffb2c465cbaa8ab8aac73bdf9 00:09:41.251 01:02:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f914923ffb2c465cbaa8ab8aac73bdf9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:41.251 01:02:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:41.251 01:02:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:41.251 01:02:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:41.251 01:02:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:41.251 01:02:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:41.251 01:02:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:41.251 01:02:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:41.251 01:02:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:41.251 01:02:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:41.251 01:02:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:41.251 01:02:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:41.251 01:02:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:41.509 [2024-07-16 01:02:57.433844] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: 
*ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:09:41.509 request: 00:09:41.509 { 00:09:41.509 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:41.509 "nsid": 2, 00:09:41.509 "host": "nqn.2016-06.io.spdk:host1", 00:09:41.509 "method": "nvmf_ns_remove_host", 00:09:41.509 "req_id": 1 00:09:41.509 } 00:09:41.509 Got JSON-RPC error response 00:09:41.509 response: 00:09:41.509 { 00:09:41.509 "code": -32602, 00:09:41.509 "message": "Invalid parameters" 00:09:41.509 } 00:09:41.509 01:02:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:41.509 01:02:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:41.509 01:02:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:41.509 01:02:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:41.509 01:02:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:09:41.509 01:02:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:41.509 01:02:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:41.509 01:02:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:41.509 01:02:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:41.509 01:02:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:41.509 01:02:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:41.509 01:02:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:41.509 01:02:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:41.509 01:02:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:41.509 01:02:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:41.509 01:02:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:41.509 01:02:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:41.509 01:02:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:41.509 01:02:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:41.509 01:02:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:41.509 01:02:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:41.509 01:02:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:41.509 01:02:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:09:41.509 01:02:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:41.509 01:02:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:41.509 [ 0]:0x2 00:09:41.509 01:02:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:41.509 01:02:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:41.768 01:02:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f914923ffb2c465cbaa8ab8aac73bdf9 00:09:41.768 01:02:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
f914923ffb2c465cbaa8ab8aac73bdf9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:41.768 01:02:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:09:41.768 01:02:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:41.768 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:41.768 01:02:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=4089602 00:09:41.768 01:02:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:09:41.768 01:02:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:09:41.768 01:02:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 4089602 /var/tmp/host.sock 00:09:41.768 01:02:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 4089602 ']' 00:09:41.768 01:02:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:09:41.768 01:02:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:41.768 01:02:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:09:41.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:09:41.768 01:02:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:41.768 01:02:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:41.768 [2024-07-16 01:02:57.623728] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 
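
The failing nvmf_ns_remove_host call above is the deliberate negative-path check: host1 was only ever mapped to namespace 1, so asking to remove it from namespace 2 is rejected. Replayed by hand it would look roughly like this (the NQNs, the nsid, and the -32602 "Invalid parameters" response are the ones shown in the log; the NOT wrapper converts the non-zero exit status into a pass):

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 \
        nqn.2016-06.io.spdk:host1 \
        || echo "expected failure: host1 has no mapping on nsid 2"
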
00:09:41.768 [2024-07-16 01:02:57.623817] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4089602 ] 00:09:41.768 EAL: No free 2048 kB hugepages reported on node 1 00:09:41.768 [2024-07-16 01:02:57.683293] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.026 [2024-07-16 01:02:57.794407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:42.285 01:02:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:42.285 01:02:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:09:42.285 01:02:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:42.285 01:02:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:42.543 01:02:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 7e3e8ce2-4f8a-4407-95c7-65d7f38e9a26 00:09:42.543 01:02:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:09:42.543 01:02:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 7E3E8CE24F8A440795C765D7F38E9A26 -i 00:09:42.801 01:02:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 7cceb329-bdec-4582-ac81-e35b76e17bb0 00:09:42.801 01:02:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:09:42.801 01:02:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 7CCEB329BDEC4582AC81E35B76E17BB0 -i 00:09:43.059 01:02:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:43.317 01:02:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:09:43.575 01:02:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:09:43.575 01:02:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:09:44.153 nvme0n1 00:09:44.153 01:02:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:09:44.153 01:02:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b 
nvme1 00:09:44.415 nvme1n2 00:09:44.415 01:03:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:09:44.415 01:03:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:09:44.415 01:03:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:09:44.415 01:03:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:09:44.415 01:03:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:09:44.673 01:03:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:09:44.673 01:03:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:09:44.673 01:03:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:09:44.673 01:03:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:09:44.930 01:03:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 7e3e8ce2-4f8a-4407-95c7-65d7f38e9a26 == \7\e\3\e\8\c\e\2\-\4\f\8\a\-\4\4\0\7\-\9\5\c\7\-\6\5\d\7\f\3\8\e\9\a\2\6 ]] 00:09:44.930 01:03:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:09:44.930 01:03:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:09:44.930 01:03:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:09:45.187 01:03:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 7cceb329-bdec-4582-ac81-e35b76e17bb0 == \7\c\c\e\b\3\2\9\-\b\d\e\c\-\4\5\8\2\-\a\c\8\1\-\e\3\5\b\7\6\e\1\7\b\b\0 ]] 00:09:45.187 01:03:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 4089602 00:09:45.187 01:03:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 4089602 ']' 00:09:45.187 01:03:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 4089602 00:09:45.187 01:03:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:09:45.187 01:03:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:45.187 01:03:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4089602 00:09:45.187 01:03:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:45.187 01:03:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:45.187 01:03:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4089602' 00:09:45.187 killing process with pid 4089602 00:09:45.187 01:03:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 4089602 00:09:45.187 01:03:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 4089602 00:09:45.751 01:03:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:45.751 01:03:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:09:45.751 01:03:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:09:45.751 01:03:01 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:45.751 01:03:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:09:45.751 01:03:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:45.751 01:03:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:09:45.751 01:03:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:45.751 01:03:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:45.751 rmmod nvme_tcp 00:09:45.751 rmmod nvme_fabrics 00:09:45.751 rmmod nvme_keyring 00:09:45.751 01:03:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:46.009 01:03:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:09:46.009 01:03:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:09:46.009 01:03:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 4088109 ']' 00:09:46.009 01:03:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 4088109 00:09:46.009 01:03:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 4088109 ']' 00:09:46.009 01:03:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 4088109 00:09:46.009 01:03:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:09:46.009 01:03:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:46.009 01:03:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4088109 00:09:46.009 01:03:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:46.009 01:03:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:46.009 01:03:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4088109' 00:09:46.009 killing process with pid 4088109 00:09:46.009 01:03:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 4088109 00:09:46.009 01:03:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 4088109 00:09:46.267 01:03:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:46.267 01:03:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:46.267 01:03:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:46.267 01:03:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:46.267 01:03:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:46.267 01:03:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:46.267 01:03:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:46.267 01:03:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:48.169 01:03:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:48.169 00:09:48.169 real 0m20.670s 00:09:48.169 user 0m26.571s 00:09:48.169 sys 0m4.235s 00:09:48.169 01:03:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:48.169 01:03:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:48.169 ************************************ 00:09:48.169 END TEST nvmf_ns_masking 00:09:48.169 ************************************ 00:09:48.169 01:03:04 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:09:48.169 01:03:04 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:09:48.169 01:03:04 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:09:48.169 01:03:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:48.169 01:03:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:48.169 01:03:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:48.428 ************************************ 00:09:48.428 START TEST nvmf_nvme_cli 00:09:48.428 ************************************ 00:09:48.428 01:03:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:09:48.428 * Looking for test storage... 00:09:48.428 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:48.428 01:03:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:48.428 01:03:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:09:48.428 01:03:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:48.428 01:03:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:48.428 01:03:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:48.428 01:03:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:48.428 01:03:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:48.428 01:03:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:48.428 01:03:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:48.428 01:03:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:48.428 01:03:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:48.428 01:03:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:48.428 01:03:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:48.428 01:03:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:48.428 01:03:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:48.428 01:03:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:48.428 01:03:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:48.428 01:03:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:48.428 01:03:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:48.428 01:03:04 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:48.428 01:03:04 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:48.428 01:03:04 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:48.428 01:03:04 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.428 01:03:04 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.428 01:03:04 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.428 01:03:04 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:09:48.428 01:03:04 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.428 01:03:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:09:48.428 01:03:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:48.428 01:03:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:48.428 01:03:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:48.428 01:03:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:48.428 01:03:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:48.428 01:03:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:48.428 01:03:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:48.428 01:03:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:48.428 01:03:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:48.428 01:03:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:48.428 01:03:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:09:48.428 01:03:04 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:09:48.428 01:03:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:48.428 01:03:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:48.428 01:03:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:48.428 01:03:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:48.428 01:03:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:48.428 01:03:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:48.428 01:03:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:48.428 01:03:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:48.428 01:03:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:48.428 01:03:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:48.428 01:03:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:09:48.428 01:03:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:50.959 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:50.959 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:09:50.959 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:50.959 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:50.959 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:50.959 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:50.959 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:50.959 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:09:50.959 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:50.959 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:09:50.959 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:09:50.959 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:09:50.959 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:09:50.959 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:09:50.959 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:50.960 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:50.960 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:50.960 Found net devices under 0000:09:00.0: cvl_0_0 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:50.960 Found net devices under 0000:09:00.1: cvl_0_1 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:50.960 01:03:06 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:50.960 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:50.960 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:09:50.960 00:09:50.960 --- 10.0.0.2 ping statistics --- 00:09:50.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:50.960 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:50.960 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:50.960 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:09:50.960 00:09:50.960 --- 10.0.0.1 ping statistics --- 00:09:50.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:50.960 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=4092210 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 4092210 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 4092210 ']' 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:50.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:50.960 [2024-07-16 01:03:06.618670] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 
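
Before any of the nvme_cli work can run, nvmftestinit splits the physical NIC pair into a target side and an initiator side using a network namespace; the xtrace lines above show the sequence piecemeal, so here it is gathered into one sketch. The interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are specific to this machine, and everything needs root:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk    # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator port stays in the default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                           # target side reachable before nvmf_tgt starts

The nvmf_tgt started next is launched under ip netns exec cvl_0_0_ns_spdk, which is why its listener on 10.0.0.2:4420 is reachable from the host-side initiator.
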
00:09:50.960 [2024-07-16 01:03:06.618754] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:50.960 EAL: No free 2048 kB hugepages reported on node 1 00:09:50.960 [2024-07-16 01:03:06.682161] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:50.960 [2024-07-16 01:03:06.796164] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:50.960 [2024-07-16 01:03:06.796218] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:50.960 [2024-07-16 01:03:06.796247] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:50.960 [2024-07-16 01:03:06.796270] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:50.960 [2024-07-16 01:03:06.796281] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:50.960 [2024-07-16 01:03:06.796371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:50.960 [2024-07-16 01:03:06.796429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:50.960 [2024-07-16 01:03:06.796455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:50.960 [2024-07-16 01:03:06.796467] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:50.960 01:03:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:51.221 01:03:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:51.222 01:03:06 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:51.222 01:03:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:51.222 01:03:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:51.222 [2024-07-16 01:03:06.961862] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:51.222 01:03:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:51.222 01:03:06 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:51.222 01:03:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:51.222 01:03:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:51.222 Malloc0 00:09:51.222 01:03:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:51.222 01:03:06 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:09:51.222 01:03:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:51.222 01:03:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:51.222 Malloc1 00:09:51.222 01:03:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:51.222 01:03:07 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:09:51.222 01:03:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:51.222 01:03:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:51.222 01:03:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:51.222 01:03:07 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:51.222 01:03:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:51.222 01:03:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:51.222 01:03:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:51.222 01:03:07 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:51.222 01:03:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:51.222 01:03:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:51.222 01:03:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:51.222 01:03:07 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:51.222 01:03:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:51.222 01:03:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:51.222 [2024-07-16 01:03:07.044223] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:51.222 01:03:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:51.222 01:03:07 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:51.222 01:03:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:51.222 01:03:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:51.222 01:03:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:51.222 01:03:07 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 4420 00:09:51.222 00:09:51.222 Discovery Log Number of Records 2, Generation counter 2 00:09:51.222 =====Discovery Log Entry 0====== 00:09:51.222 trtype: tcp 00:09:51.222 adrfam: ipv4 00:09:51.222 subtype: current discovery subsystem 00:09:51.222 treq: not required 00:09:51.222 portid: 0 00:09:51.222 trsvcid: 4420 00:09:51.222 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:51.222 traddr: 10.0.0.2 00:09:51.222 eflags: explicit discovery connections, duplicate discovery information 00:09:51.222 sectype: none 00:09:51.222 =====Discovery Log Entry 1====== 00:09:51.222 trtype: tcp 00:09:51.222 adrfam: ipv4 00:09:51.222 subtype: nvme subsystem 00:09:51.222 treq: not required 00:09:51.222 portid: 0 00:09:51.222 trsvcid: 4420 00:09:51.222 subnqn: nqn.2016-06.io.spdk:cnode1 00:09:51.222 traddr: 10.0.0.2 00:09:51.222 eflags: none 00:09:51.222 sectype: none 00:09:51.222 01:03:07 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:09:51.222 01:03:07 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:09:51.222 01:03:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:09:51.222 01:03:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:51.222 01:03:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:09:51.222 01:03:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:09:51.222 01:03:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:51.222 01:03:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:09:51.222 01:03:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:51.222 01:03:07 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:09:51.222 01:03:07 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:51.822 01:03:07 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:09:51.822 01:03:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:09:51.822 01:03:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:51.822 01:03:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:09:51.822 01:03:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:09:51.822 01:03:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:09:54.346 01:03:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:54.346 01:03:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:54.346 01:03:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:54.346 01:03:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:09:54.346 01:03:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:54.346 01:03:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:09:54.346 01:03:09 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:09:54.346 01:03:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:09:54.346 01:03:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:54.346 01:03:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:09:54.346 01:03:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:09:54.346 01:03:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:54.346 01:03:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:09:54.346 01:03:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:54.346 01:03:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:09:54.346 01:03:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:09:54.346 01:03:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:54.347 01:03:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:09:54.347 01:03:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:09:54.347 01:03:09 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:54.347 01:03:09 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:09:54.347 /dev/nvme0n1 ]] 00:09:54.347 01:03:09 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:09:54.347 01:03:09 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:09:54.347 01:03:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:09:54.347 01:03:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:54.347 01:03:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:09:54.347 01:03:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:09:54.347 01:03:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:54.347 01:03:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:09:54.347 01:03:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:54.347 01:03:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:09:54.347 01:03:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:09:54.347 01:03:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:54.347 01:03:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:09:54.347 01:03:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:09:54.347 01:03:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:54.347 01:03:09 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:09:54.347 01:03:09 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:54.347 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:54.347 01:03:09 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:54.347 01:03:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:09:54.347 01:03:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:54.347 01:03:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:54.347 01:03:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:54.347 01:03:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:54.347 01:03:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:09:54.347 01:03:09 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:09:54.347 01:03:09 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:54.347 01:03:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:54.347 01:03:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:54.347 01:03:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:54.347 01:03:09 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:09:54.347 01:03:09 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:09:54.347 01:03:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:54.347 01:03:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:09:54.347 01:03:09 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:54.347 01:03:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:09:54.347 01:03:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:54.347 01:03:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:54.347 rmmod nvme_tcp 00:09:54.347 rmmod nvme_fabrics 00:09:54.347 rmmod nvme_keyring 00:09:54.347 01:03:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:54.347 01:03:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:09:54.347 01:03:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:09:54.347 01:03:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 4092210 ']' 00:09:54.347 01:03:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 4092210 00:09:54.347 01:03:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 4092210 ']' 00:09:54.347 01:03:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 4092210 00:09:54.347 01:03:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:09:54.347 01:03:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:54.347 01:03:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4092210 00:09:54.347 01:03:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:54.347 01:03:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:54.347 01:03:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4092210' 00:09:54.347 killing process with pid 4092210 00:09:54.347 01:03:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 4092210 00:09:54.347 01:03:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 4092210 00:09:54.347 01:03:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:54.347 01:03:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:54.347 01:03:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:54.347 01:03:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:54.347 01:03:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:54.347 01:03:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:54.347 01:03:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:54.347 01:03:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:56.908 01:03:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:56.908 00:09:56.908 real 0m8.219s 00:09:56.908 user 0m14.572s 00:09:56.908 sys 0m2.273s 00:09:56.908 01:03:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:56.908 01:03:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:56.908 ************************************ 00:09:56.908 END TEST nvmf_nvme_cli 00:09:56.908 ************************************ 00:09:56.908 01:03:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:56.908 01:03:12 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:09:56.908 01:03:12 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:09:56.908 01:03:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:56.908 01:03:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:56.908 01:03:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:56.908 ************************************ 00:09:56.908 START TEST nvmf_vfio_user 00:09:56.908 ************************************ 00:09:56.908 01:03:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:09:56.908 * Looking for test storage... 00:09:56.908 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:56.908 01:03:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:56.908 01:03:12 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:09:56.908 01:03:12 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:56.908 01:03:12 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:56.908 01:03:12 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:56.908 01:03:12 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:56.908 01:03:12 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:56.908 01:03:12 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:56.908 01:03:12 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:56.908 01:03:12 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:56.908 01:03:12 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:56.908 01:03:12 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:56.908 01:03:12 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:56.908 01:03:12 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:56.908 01:03:12 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:56.908 01:03:12 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:56.908 01:03:12 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:56.908 01:03:12 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:56.908 01:03:12 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:56.908 01:03:12 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:56.908 01:03:12 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:56.908 01:03:12 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:56.908 01:03:12 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.908 01:03:12 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.908 01:03:12 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.908 01:03:12 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:09:56.908 01:03:12 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.908 01:03:12 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:09:56.908 01:03:12 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:56.908 01:03:12 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:56.908 01:03:12 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:56.908 01:03:12 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:56.908 01:03:12 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:56.908 01:03:12 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:56.908 01:03:12 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:56.908 01:03:12 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:56.908 01:03:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:09:56.908 01:03:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:56.908 01:03:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:09:56.908 
01:03:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:56.908 01:03:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:09:56.908 01:03:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:09:56.908 01:03:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:09:56.908 01:03:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:09:56.908 01:03:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:09:56.908 01:03:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:09:56.908 01:03:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=4093612 00:09:56.908 01:03:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:09:56.908 01:03:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 4093612' 00:09:56.908 Process pid: 4093612 00:09:56.908 01:03:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:09:56.908 01:03:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 4093612 00:09:56.908 01:03:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 4093612 ']' 00:09:56.908 01:03:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:56.908 01:03:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:56.908 01:03:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:56.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:56.908 01:03:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:56.908 01:03:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:09:56.908 [2024-07-16 01:03:12.561926] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:09:56.908 [2024-07-16 01:03:12.562057] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:56.908 EAL: No free 2048 kB hugepages reported on node 1 00:09:56.908 [2024-07-16 01:03:12.619735] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:56.908 [2024-07-16 01:03:12.727540] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:56.908 [2024-07-16 01:03:12.727599] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:56.908 [2024-07-16 01:03:12.727627] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:56.908 [2024-07-16 01:03:12.727638] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:56.908 [2024-07-16 01:03:12.727648] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
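Up to this point the harness has launched nvmf_tgt and is waiting on its RPC socket; the records that follow provision the vfio-user targets over that socket. A minimal sketch of doing the same by hand, assuming the job's SPDK build tree; the $SPDK shorthand and the rpc_get_methods polling loop (standing in for the harness's waitforlisten) are illustrative, not part of the test scripts:

    # Sketch only: stand up the same VFIOUSER target by hand.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk        # assumed build tree
    $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &     # same flags as the harness
    # Poll until the app answers on /var/tmp/spdk.sock (what waitforlisten does).
    until $SPDK/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done
    # Provisioning mirrors the rpc.py calls logged below for vfio-user1/1.
    $SPDK/scripts/rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1
    $SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

Note that the listener address is a directory, not an IP: the target creates the cntrl socket inside it, which the initiator later opens as a vfio-user device (visible below as "Path /var/run/vfio-user/domain/vfio-user1/1/cntrl").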
00:09:56.908 [2024-07-16 01:03:12.727709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:56.908 [2024-07-16 01:03:12.727804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:56.908 [2024-07-16 01:03:12.727851] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:56.908 [2024-07-16 01:03:12.727854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.908 01:03:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:56.908 01:03:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:09:56.908 01:03:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:09:58.277 01:03:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:09:58.277 01:03:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:09:58.277 01:03:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:09:58.277 01:03:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:58.277 01:03:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:09:58.277 01:03:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:09:58.534 Malloc1 00:09:58.535 01:03:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:09:58.792 01:03:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:09:59.051 01:03:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:09:59.308 01:03:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:59.308 01:03:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:09:59.308 01:03:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:09:59.566 Malloc2 00:09:59.566 01:03:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:09:59.823 01:03:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:10:00.080 01:03:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:10:00.339 01:03:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:10:00.339 01:03:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:10:00.339 01:03:16 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:00.339 01:03:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:10:00.339 01:03:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:10:00.339 01:03:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:10:00.339 [2024-07-16 01:03:16.208378] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:10:00.339 [2024-07-16 01:03:16.208422] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4094057 ] 00:10:00.339 EAL: No free 2048 kB hugepages reported on node 1 00:10:00.339 [2024-07-16 01:03:16.243331] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:10:00.339 [2024-07-16 01:03:16.251457] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:00.339 [2024-07-16 01:03:16.251484] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f88c61e8000 00:10:00.339 [2024-07-16 01:03:16.252453] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:00.339 [2024-07-16 01:03:16.253449] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:00.339 [2024-07-16 01:03:16.254449] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:00.339 [2024-07-16 01:03:16.255459] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:00.339 [2024-07-16 01:03:16.256461] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:00.339 [2024-07-16 01:03:16.257468] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:00.339 [2024-07-16 01:03:16.258476] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:00.339 [2024-07-16 01:03:16.259481] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:00.339 [2024-07-16 01:03:16.260491] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:00.339 [2024-07-16 01:03:16.260512] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f88c61dd000 00:10:00.339 [2024-07-16 01:03:16.261652] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:10:00.339 [2024-07-16 01:03:16.277609] vfio_user_pci.c: 
386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:10:00.339 [2024-07-16 01:03:16.277646] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:10:00.339 [2024-07-16 01:03:16.282610] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:10:00.340 [2024-07-16 01:03:16.282667] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:10:00.340 [2024-07-16 01:03:16.282771] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:10:00.340 [2024-07-16 01:03:16.282804] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:10:00.340 [2024-07-16 01:03:16.282814] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:10:00.340 [2024-07-16 01:03:16.283603] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:10:00.340 [2024-07-16 01:03:16.283623] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:10:00.340 [2024-07-16 01:03:16.283635] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:10:00.340 [2024-07-16 01:03:16.284607] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:10:00.340 [2024-07-16 01:03:16.284624] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:10:00.340 [2024-07-16 01:03:16.284637] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:10:00.340 [2024-07-16 01:03:16.285612] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:10:00.340 [2024-07-16 01:03:16.285631] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:10:00.340 [2024-07-16 01:03:16.286618] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:10:00.340 [2024-07-16 01:03:16.286638] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:10:00.340 [2024-07-16 01:03:16.286647] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:10:00.340 [2024-07-16 01:03:16.286658] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:10:00.340 [2024-07-16 01:03:16.286767] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:10:00.340 [2024-07-16 01:03:16.286775] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:10:00.340 [2024-07-16 01:03:16.286783] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:10:00.340 [2024-07-16 01:03:16.287625] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:10:00.340 [2024-07-16 01:03:16.288627] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:10:00.340 [2024-07-16 01:03:16.289633] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:10:00.340 [2024-07-16 01:03:16.290626] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:00.340 [2024-07-16 01:03:16.290735] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:10:00.340 [2024-07-16 01:03:16.291647] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:10:00.340 [2024-07-16 01:03:16.291665] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:10:00.340 [2024-07-16 01:03:16.291673] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:10:00.340 [2024-07-16 01:03:16.291697] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:10:00.340 [2024-07-16 01:03:16.291709] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:10:00.340 [2024-07-16 01:03:16.291740] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:00.340 [2024-07-16 01:03:16.291750] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:00.340 [2024-07-16 01:03:16.291772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:00.340 [2024-07-16 01:03:16.291826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:10:00.340 [2024-07-16 01:03:16.291843] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:10:00.340 [2024-07-16 01:03:16.291851] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:10:00.340 [2024-07-16 01:03:16.291858] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:10:00.340 [2024-07-16 01:03:16.291865] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:10:00.340 [2024-07-16 01:03:16.291872] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:10:00.340 [2024-07-16 01:03:16.291880] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:10:00.340 [2024-07-16 01:03:16.291887] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:10:00.340 [2024-07-16 01:03:16.291900] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:10:00.340 [2024-07-16 01:03:16.291920] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:10:00.340 [2024-07-16 01:03:16.291952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:10:00.340 [2024-07-16 01:03:16.291979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:00.340 [2024-07-16 01:03:16.291997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:00.340 [2024-07-16 01:03:16.292010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:00.340 [2024-07-16 01:03:16.292022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:00.340 [2024-07-16 01:03:16.292030] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:10:00.340 [2024-07-16 01:03:16.292047] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:10:00.340 [2024-07-16 01:03:16.292062] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:10:00.340 [2024-07-16 01:03:16.292074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:10:00.340 [2024-07-16 01:03:16.292085] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:10:00.340 [2024-07-16 01:03:16.292093] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:10:00.340 [2024-07-16 01:03:16.292108] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:10:00.340 [2024-07-16 01:03:16.292120] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:10:00.340 [2024-07-16 01:03:16.292133] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:00.340 [2024-07-16 01:03:16.292147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:10:00.340 [2024-07-16 01:03:16.292217] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:10:00.340 [2024-07-16 01:03:16.292247] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:10:00.340 [2024-07-16 01:03:16.292262] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:10:00.340 [2024-07-16 01:03:16.292270] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:10:00.340 [2024-07-16 01:03:16.292280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:10:00.340 [2024-07-16 01:03:16.292294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:10:00.340 [2024-07-16 01:03:16.292328] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:10:00.340 [2024-07-16 01:03:16.292350] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:10:00.340 [2024-07-16 01:03:16.292366] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:10:00.340 [2024-07-16 01:03:16.292378] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:00.340 [2024-07-16 01:03:16.292385] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:00.340 [2024-07-16 01:03:16.292394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:00.340 [2024-07-16 01:03:16.292416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:10:00.340 [2024-07-16 01:03:16.292442] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:10:00.340 [2024-07-16 01:03:16.292457] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:10:00.340 [2024-07-16 01:03:16.292469] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:00.340 [2024-07-16 01:03:16.292477] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:00.340 [2024-07-16 01:03:16.292486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:00.340 [2024-07-16 01:03:16.292496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:10:00.340 [2024-07-16 01:03:16.292510] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:10:00.340 [2024-07-16 01:03:16.292521] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
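A decoding note for the IDENTIFY commands traced above: the low byte of cdw10 in an admin IDENTIFY is the CNS (Controller or Namespace Structure) field, so per the NVMe base specification the four submissions in this init sequence map to:

    cdw10:00000001  CNS 01h  Identify Controller
    cdw10:00000002  CNS 02h  Active Namespace ID list
    cdw10:00000000  CNS 00h  Identify Namespace (here nsid:1)
    cdw10:00000003  CNS 03h  Namespace Identification Descriptor list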
00:10:00.340 [2024-07-16 01:03:16.292534] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:10:00.340 [2024-07-16 01:03:16.292546] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:10:00.340 [2024-07-16 01:03:16.292553] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:10:00.340 [2024-07-16 01:03:16.292562] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:10:00.340 [2024-07-16 01:03:16.292570] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:10:00.341 [2024-07-16 01:03:16.292577] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:10:00.341 [2024-07-16 01:03:16.292585] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:10:00.341 [2024-07-16 01:03:16.292615] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:10:00.341 [2024-07-16 01:03:16.292633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:10:00.341 [2024-07-16 01:03:16.292652] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:10:00.341 [2024-07-16 01:03:16.292663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:10:00.341 [2024-07-16 01:03:16.292678] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:10:00.341 [2024-07-16 01:03:16.292689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:10:00.341 [2024-07-16 01:03:16.292704] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:00.341 [2024-07-16 01:03:16.292715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:10:00.341 [2024-07-16 01:03:16.292738] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:10:00.341 [2024-07-16 01:03:16.292747] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:10:00.341 [2024-07-16 01:03:16.292757] nvme_pcie_common.c:1240:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:10:00.341 [2024-07-16 01:03:16.292763] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:10:00.341 [2024-07-16 01:03:16.292772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:10:00.341 [2024-07-16 01:03:16.292783] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:10:00.341 
[2024-07-16 01:03:16.292791] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:10:00.341 [2024-07-16 01:03:16.292799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:10:00.341 [2024-07-16 01:03:16.292809] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:10:00.341 [2024-07-16 01:03:16.292817] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:00.341 [2024-07-16 01:03:16.292825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:00.341 [2024-07-16 01:03:16.292837] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:10:00.341 [2024-07-16 01:03:16.292844] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:10:00.341 [2024-07-16 01:03:16.292853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:10:00.341 [2024-07-16 01:03:16.292863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:10:00.341 [2024-07-16 01:03:16.292882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:10:00.341 [2024-07-16 01:03:16.292901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:10:00.341 [2024-07-16 01:03:16.292913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:10:00.341 ===================================================== 00:10:00.341 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:00.341 ===================================================== 00:10:00.341 Controller Capabilities/Features 00:10:00.341 ================================ 00:10:00.341 Vendor ID: 4e58 00:10:00.341 Subsystem Vendor ID: 4e58 00:10:00.341 Serial Number: SPDK1 00:10:00.341 Model Number: SPDK bdev Controller 00:10:00.341 Firmware Version: 24.09 00:10:00.341 Recommended Arb Burst: 6 00:10:00.341 IEEE OUI Identifier: 8d 6b 50 00:10:00.341 Multi-path I/O 00:10:00.341 May have multiple subsystem ports: Yes 00:10:00.341 May have multiple controllers: Yes 00:10:00.341 Associated with SR-IOV VF: No 00:10:00.341 Max Data Transfer Size: 131072 00:10:00.341 Max Number of Namespaces: 32 00:10:00.341 Max Number of I/O Queues: 127 00:10:00.341 NVMe Specification Version (VS): 1.3 00:10:00.341 NVMe Specification Version (Identify): 1.3 00:10:00.341 Maximum Queue Entries: 256 00:10:00.341 Contiguous Queues Required: Yes 00:10:00.341 Arbitration Mechanisms Supported 00:10:00.341 Weighted Round Robin: Not Supported 00:10:00.341 Vendor Specific: Not Supported 00:10:00.341 Reset Timeout: 15000 ms 00:10:00.341 Doorbell Stride: 4 bytes 00:10:00.341 NVM Subsystem Reset: Not Supported 00:10:00.341 Command Sets Supported 00:10:00.341 NVM Command Set: Supported 00:10:00.341 Boot Partition: Not Supported 00:10:00.341 Memory Page Size Minimum: 4096 bytes 00:10:00.341 Memory Page Size Maximum: 4096 bytes 00:10:00.341 Persistent Memory Region: Not Supported 
00:10:00.341 Optional Asynchronous Events Supported 00:10:00.341 Namespace Attribute Notices: Supported 00:10:00.341 Firmware Activation Notices: Not Supported 00:10:00.341 ANA Change Notices: Not Supported 00:10:00.341 PLE Aggregate Log Change Notices: Not Supported 00:10:00.341 LBA Status Info Alert Notices: Not Supported 00:10:00.341 EGE Aggregate Log Change Notices: Not Supported 00:10:00.341 Normal NVM Subsystem Shutdown event: Not Supported 00:10:00.341 Zone Descriptor Change Notices: Not Supported 00:10:00.341 Discovery Log Change Notices: Not Supported 00:10:00.341 Controller Attributes 00:10:00.341 128-bit Host Identifier: Supported 00:10:00.341 Non-Operational Permissive Mode: Not Supported 00:10:00.341 NVM Sets: Not Supported 00:10:00.341 Read Recovery Levels: Not Supported 00:10:00.341 Endurance Groups: Not Supported 00:10:00.341 Predictable Latency Mode: Not Supported 00:10:00.341 Traffic Based Keep Alive: Not Supported 00:10:00.341 Namespace Granularity: Not Supported 00:10:00.341 SQ Associations: Not Supported 00:10:00.341 UUID List: Not Supported 00:10:00.341 Multi-Domain Subsystem: Not Supported 00:10:00.341 Fixed Capacity Management: Not Supported 00:10:00.341 Variable Capacity Management: Not Supported 00:10:00.341 Delete Endurance Group: Not Supported 00:10:00.341 Delete NVM Set: Not Supported 00:10:00.341 Extended LBA Formats Supported: Not Supported 00:10:00.341 Flexible Data Placement Supported: Not Supported 00:10:00.341 00:10:00.341 Controller Memory Buffer Support 00:10:00.341 ================================ 00:10:00.341 Supported: No 00:10:00.341 00:10:00.341 Persistent Memory Region Support 00:10:00.341 ================================ 00:10:00.341 Supported: No 00:10:00.341 00:10:00.341 Admin Command Set Attributes 00:10:00.341 ============================ 00:10:00.341 Security Send/Receive: Not Supported 00:10:00.341 Format NVM: Not Supported 00:10:00.341 Firmware Activate/Download: Not Supported 00:10:00.341 Namespace Management: Not Supported 00:10:00.341 Device Self-Test: Not Supported 00:10:00.341 Directives: Not Supported 00:10:00.341 NVMe-MI: Not Supported 00:10:00.341 Virtualization Management: Not Supported 00:10:00.341 Doorbell Buffer Config: Not Supported 00:10:00.341 Get LBA Status Capability: Not Supported 00:10:00.341 Command & Feature Lockdown Capability: Not Supported 00:10:00.341 Abort Command Limit: 4 00:10:00.341 Async Event Request Limit: 4 00:10:00.341 Number of Firmware Slots: N/A 00:10:00.341 Firmware Slot 1 Read-Only: N/A 00:10:00.341 Firmware Activation Without Reset: N/A 00:10:00.341 Multiple Update Detection Support: N/A 00:10:00.341 Firmware Update Granularity: No Information Provided 00:10:00.341 Per-Namespace SMART Log: No 00:10:00.341 Asymmetric Namespace Access Log Page: Not Supported 00:10:00.341 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:10:00.341 Command Effects Log Page: Supported 00:10:00.341 Get Log Page Extended Data: Supported 00:10:00.341 Telemetry Log Pages: Not Supported 00:10:00.341 Persistent Event Log Pages: Not Supported 00:10:00.341 Supported Log Pages Log Page: May Support 00:10:00.341 Commands Supported & Effects Log Page: Not Supported 00:10:00.341 Feature Identifiers & Effects Log Page: May Support 00:10:00.341 NVMe-MI Commands & Effects Log Page: May Support 00:10:00.341 Data Area 4 for Telemetry Log: Not Supported 00:10:00.342 Error Log Page Entries Supported: 128 00:10:00.342 Keep Alive: Supported 00:10:00.342 Keep Alive Granularity: 10000 ms 00:10:00.342 00:10:00.342 NVM Command Set Attributes
00:10:00.342 ========================== 00:10:00.342 Submission Queue Entry Size 00:10:00.342 Max: 64 00:10:00.342 Min: 64 00:10:00.342 Completion Queue Entry Size 00:10:00.342 Max: 16 00:10:00.342 Min: 16 00:10:00.342 Number of Namespaces: 32 00:10:00.342 Compare Command: Supported 00:10:00.342 Write Uncorrectable Command: Not Supported 00:10:00.342 Dataset Management Command: Supported 00:10:00.342 Write Zeroes Command: Supported 00:10:00.342 Set Features Save Field: Not Supported 00:10:00.342 Reservations: Not Supported 00:10:00.342 Timestamp: Not Supported 00:10:00.342 Copy: Supported 00:10:00.342 Volatile Write Cache: Present 00:10:00.342 Atomic Write Unit (Normal): 1 00:10:00.342 Atomic Write Unit (PFail): 1 00:10:00.342 Atomic Compare & Write Unit: 1 00:10:00.342 Fused Compare & Write: Supported 00:10:00.342 Scatter-Gather List 00:10:00.342 SGL Command Set: Supported (Dword aligned) 00:10:00.342 SGL Keyed: Not Supported 00:10:00.342 SGL Bit Bucket Descriptor: Not Supported 00:10:00.342 SGL Metadata Pointer: Not Supported 00:10:00.342 Oversized SGL: Not Supported 00:10:00.342 SGL Metadata Address: Not Supported 00:10:00.342 SGL Offset: Not Supported 00:10:00.342 Transport SGL Data Block: Not Supported 00:10:00.342 Replay Protected Memory Block: Not Supported 00:10:00.342 00:10:00.342 Firmware Slot Information 00:10:00.342 ========================= 00:10:00.342 Active slot: 1 00:10:00.342 Slot 1 Firmware Revision: 24.09 00:10:00.342 00:10:00.342 00:10:00.342 Commands Supported and Effects 00:10:00.342 ============================== 00:10:00.342 Admin Commands 00:10:00.342 -------------- 00:10:00.342 Get Log Page (02h): Supported 00:10:00.342 Identify (06h): Supported 00:10:00.342 Abort (08h): Supported 00:10:00.342 Set Features (09h): Supported 00:10:00.342 Get Features (0Ah): Supported 00:10:00.342 Asynchronous Event Request (0Ch): Supported 00:10:00.342 Keep Alive (18h): Supported 00:10:00.342 I/O Commands 00:10:00.342 ------------ 00:10:00.342 Flush (00h): Supported LBA-Change 00:10:00.342 Write (01h): Supported LBA-Change 00:10:00.342 Read (02h): Supported 00:10:00.342 Compare (05h): Supported 00:10:00.342 Write Zeroes (08h): Supported LBA-Change 00:10:00.342 Dataset Management (09h): Supported LBA-Change 00:10:00.342 Copy (19h): Supported LBA-Change 00:10:00.342 00:10:00.342 Error Log 00:10:00.342 ========= 00:10:00.342 00:10:00.342 Arbitration 00:10:00.342 =========== 00:10:00.342 Arbitration Burst: 1 00:10:00.342 00:10:00.342 Power Management 00:10:00.342 ================ 00:10:00.342 Number of Power States: 1 00:10:00.342 Current Power State: Power State #0 00:10:00.342 Power State #0: 00:10:00.342 Max Power: 0.00 W 00:10:00.342 Non-Operational State: Operational 00:10:00.342 Entry Latency: Not Reported 00:10:00.342 Exit Latency: Not Reported 00:10:00.342 Relative Read Throughput: 0 00:10:00.342 Relative Read Latency: 0 00:10:00.342 Relative Write Throughput: 0 00:10:00.342 Relative Write Latency: 0 00:10:00.342 Idle Power: Not Reported 00:10:00.342 Active Power: Not Reported 00:10:00.342 Non-Operational Permissive Mode: Not Supported 00:10:00.342 00:10:00.342 Health Information 00:10:00.342 ================== 00:10:00.342 Critical Warnings: 00:10:00.342 Available Spare Space: OK 00:10:00.342 Temperature: OK 00:10:00.342 Device Reliability: OK 00:10:00.342 Read Only: No 00:10:00.342 Volatile Memory Backup: OK 00:10:00.342 Current Temperature: 0 Kelvin (-273 Celsius) 00:10:00.342 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:10:00.342 Available Spare: 0% 00:10:00.342 
[2024-07-16 01:03:16.293064] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:10:00.342 [2024-07-16 01:03:16.293081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:10:00.342 [2024-07-16 01:03:16.293128] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:10:00.342 [2024-07-16 01:03:16.293146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:00.342 [2024-07-16 01:03:16.293157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:00.342 [2024-07-16 01:03:16.293167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:00.342 [2024-07-16 01:03:16.293177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:00.342 [2024-07-16 01:03:16.296969] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:10:00.342 [2024-07-16 01:03:16.296994] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:10:00.342 [2024-07-16 01:03:16.297679] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:00.342 [2024-07-16 01:03:16.297764] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:10:00.342 [2024-07-16 01:03:16.297778] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:10:00.342 [2024-07-16 01:03:16.298693] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:10:00.342 [2024-07-16 01:03:16.298716] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:10:00.342 [2024-07-16 01:03:16.298773] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:10:00.342 [2024-07-16 01:03:16.300735] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:10:00.600 Available Spare Threshold: 0% 00:10:00.600 Life Percentage Used: 0% 00:10:00.600 Data Units Read: 0 00:10:00.600 Data Units Written: 0 00:10:00.600 Host Read Commands: 0 00:10:00.600 Host Write Commands: 0 00:10:00.600 Controller Busy Time: 0 minutes 00:10:00.600 Power Cycles: 0 00:10:00.600 Power On Hours: 0 hours 00:10:00.600 Unsafe Shutdowns: 0 00:10:00.600 Unrecoverable Media Errors: 0 00:10:00.600 Lifetime Error Log Entries: 0 00:10:00.600 Warning Temperature Time: 0 minutes 00:10:00.600 Critical Temperature Time: 0 minutes 00:10:00.600 00:10:00.600 Number of Queues 00:10:00.600 ================ 00:10:00.600 Number of I/O Submission Queues: 127 00:10:00.600 Number of I/O Completion Queues: 127 00:10:00.600 00:10:00.600 Active Namespaces 00:10:00.600 ================= 00:10:00.600 Namespace ID:1 00:10:00.600 Error Recovery Timeout: Unlimited 00:10:00.600 Command
Set Identifier: NVM (00h) 00:10:00.600 Deallocate: Supported 00:10:00.600 Deallocated/Unwritten Error: Not Supported 00:10:00.600 Deallocated Read Value: Unknown 00:10:00.600 Deallocate in Write Zeroes: Not Supported 00:10:00.600 Deallocated Guard Field: 0xFFFF 00:10:00.600 Flush: Supported 00:10:00.600 Reservation: Supported 00:10:00.600 Namespace Sharing Capabilities: Multiple Controllers 00:10:00.600 Size (in LBAs): 131072 (0GiB) 00:10:00.600 Capacity (in LBAs): 131072 (0GiB) 00:10:00.600 Utilization (in LBAs): 131072 (0GiB) 00:10:00.600 NGUID: 7E0C3779AA834F1184159776B8CE81CE 00:10:00.600 UUID: 7e0c3779-aa83-4f11-8415-9776b8ce81ce 00:10:00.600 Thin Provisioning: Not Supported 00:10:00.600 Per-NS Atomic Units: Yes 00:10:00.600 Atomic Boundary Size (Normal): 0 00:10:00.600 Atomic Boundary Size (PFail): 0 00:10:00.600 Atomic Boundary Offset: 0 00:10:00.600 Maximum Single Source Range Length: 65535 00:10:00.600 Maximum Copy Length: 65535 00:10:00.600 Maximum Source Range Count: 1 00:10:00.600 NGUID/EUI64 Never Reused: No 00:10:00.600 Namespace Write Protected: No 00:10:00.600 Number of LBA Formats: 1 00:10:00.600 Current LBA Format: LBA Format #00 00:10:00.600 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:00.600 00:10:00.600 01:03:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:10:00.600 EAL: No free 2048 kB hugepages reported on node 1 00:10:00.600 [2024-07-16 01:03:16.532777] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:05.864 Initializing NVMe Controllers 00:10:05.864 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:05.864 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:10:05.864 Initialization complete. Launching workers. 00:10:05.864 ======================================================== 00:10:05.864 Latency(us) 00:10:05.864 Device Information : IOPS MiB/s Average min max 00:10:05.864 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 34500.20 134.77 3710.30 1159.11 9144.74 00:10:05.864 ======================================================== 00:10:05.864 Total : 34500.20 134.77 3710.30 1159.11 9144.74 00:10:05.864 00:10:05.864 [2024-07-16 01:03:21.555604] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:05.864 01:03:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:10:05.864 EAL: No free 2048 kB hugepages reported on node 1 00:10:05.864 [2024-07-16 01:03:21.797740] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:11.125 Initializing NVMe Controllers 00:10:11.125 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:11.125 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:10:11.125 Initialization complete. Launching workers. 
00:10:11.125 ======================================================== 00:10:11.125 Latency(us) 00:10:11.126 Device Information : IOPS MiB/s Average min max 00:10:11.126 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16025.60 62.60 7995.58 6970.05 11977.01 00:10:11.126 ======================================================== 00:10:11.126 Total : 16025.60 62.60 7995.58 6970.05 11977.01 00:10:11.126 00:10:11.126 [2024-07-16 01:03:26.834130] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:11.126 01:03:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:10:11.126 EAL: No free 2048 kB hugepages reported on node 1 00:10:11.126 [2024-07-16 01:03:27.045202] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:16.382 [2024-07-16 01:03:32.119334] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:16.382 Initializing NVMe Controllers 00:10:16.382 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:16.382 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:16.382 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:10:16.382 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:10:16.382 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:10:16.382 Initialization complete. Launching workers. 00:10:16.382 Starting thread on core 2 00:10:16.382 Starting thread on core 3 00:10:16.382 Starting thread on core 1 00:10:16.382 01:03:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:10:16.382 EAL: No free 2048 kB hugepages reported on node 1 00:10:16.641 [2024-07-16 01:03:32.432429] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:19.960 [2024-07-16 01:03:35.499095] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:19.960 Initializing NVMe Controllers 00:10:19.960 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:19.960 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:19.960 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:10:19.960 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:10:19.960 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:10:19.960 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:10:19.960 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:10:19.960 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:10:19.960 Initialization complete. Launching workers. 
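The same cross-check applies to the write-run table just above: 16025.60 IOPS at 4096 B per I/O works out to the reported 62.60 MiB/s. A sketch under the same assumption (POSIX awk only):
awk 'BEGIN { printf "%.2f MiB/s\n", 16025.60 * 4096 / 1048576 }'    # prints 62.60 MiB/s, matching the table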
00:10:19.960 Starting thread on core 1 with urgent priority queue 00:10:19.960 Starting thread on core 2 with urgent priority queue 00:10:19.960 Starting thread on core 3 with urgent priority queue 00:10:19.960 Starting thread on core 0 with urgent priority queue 00:10:19.960 SPDK bdev Controller (SPDK1 ) core 0: 5229.67 IO/s 19.12 secs/100000 ios 00:10:19.960 SPDK bdev Controller (SPDK1 ) core 1: 5041.33 IO/s 19.84 secs/100000 ios 00:10:19.960 SPDK bdev Controller (SPDK1 ) core 2: 5138.67 IO/s 19.46 secs/100000 ios 00:10:19.960 SPDK bdev Controller (SPDK1 ) core 3: 4977.33 IO/s 20.09 secs/100000 ios 00:10:19.960 ======================================================== 00:10:19.960 00:10:19.960 01:03:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:10:19.960 EAL: No free 2048 kB hugepages reported on node 1 00:10:19.960 [2024-07-16 01:03:35.799507] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:19.960 Initializing NVMe Controllers 00:10:19.960 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:19.960 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:19.960 Namespace ID: 1 size: 0GB 00:10:19.960 Initialization complete. 00:10:19.960 INFO: using host memory buffer for IO 00:10:19.960 Hello world! 00:10:19.960 [2024-07-16 01:03:35.834084] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:19.960 01:03:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:10:19.960 EAL: No free 2048 kB hugepages reported on node 1 00:10:20.217 [2024-07-16 01:03:36.121460] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:21.150 Initializing NVMe Controllers 00:10:21.150 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:21.150 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:21.150 Initialization complete. Launching workers. 
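In the arbitration table above, the secs/100000 ios column is simply 100000 divided by the IO/s column, e.g. 100000 / 5229.67 IO/s = 19.12 s for core 0. A minimal sketch reproducing all four rows, again assuming only POSIX awk:
awk 'BEGIN { n = split("5229.67 5041.33 5138.67 4977.33", r, " "); for (i = 1; i <= n; i++) printf "%s IO/s -> %.2f secs/100000 ios\n", r[i], 100000 / r[i] }'    # 19.12, 19.84, 19.46, 20.09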
00:10:21.150 submit (in ns) avg, min, max = 6689.1, 3513.3, 4015882.2 00:10:21.150 complete (in ns) avg, min, max = 28261.3, 2081.1, 4998558.9 00:10:21.150 00:10:21.150 Submit histogram 00:10:21.150 ================ 00:10:21.150 Range in us Cumulative Count 00:10:21.150 3.508 - 3.532: 0.2507% ( 33) 00:10:21.150 3.532 - 3.556: 1.1243% ( 115) 00:10:21.150 3.556 - 3.579: 4.4211% ( 434) 00:10:21.150 3.579 - 3.603: 8.7132% ( 565) 00:10:21.150 3.603 - 3.627: 16.8642% ( 1073) 00:10:21.150 3.627 - 3.650: 24.9012% ( 1058) 00:10:21.150 3.650 - 3.674: 33.3409% ( 1111) 00:10:21.150 3.674 - 3.698: 39.9499% ( 870) 00:10:21.150 3.698 - 3.721: 47.4476% ( 987) 00:10:21.150 3.721 - 3.745: 52.8639% ( 713) 00:10:21.150 3.745 - 3.769: 57.8775% ( 660) 00:10:21.150 3.769 - 3.793: 61.9189% ( 532) 00:10:21.150 3.793 - 3.816: 65.4132% ( 460) 00:10:21.150 3.816 - 3.840: 69.2647% ( 507) 00:10:21.150 3.840 - 3.864: 73.4807% ( 555) 00:10:21.150 3.864 - 3.887: 77.5980% ( 542) 00:10:21.150 3.887 - 3.911: 81.0772% ( 458) 00:10:21.150 3.911 - 3.935: 84.0398% ( 390) 00:10:21.150 3.935 - 3.959: 86.2884% ( 296) 00:10:21.150 3.959 - 3.982: 88.1267% ( 242) 00:10:21.150 3.982 - 4.006: 89.5776% ( 191) 00:10:21.150 4.006 - 4.030: 90.8994% ( 174) 00:10:21.150 4.030 - 4.053: 91.9325% ( 136) 00:10:21.150 4.053 - 4.077: 92.8897% ( 126) 00:10:21.150 4.077 - 4.101: 93.7861% ( 118) 00:10:21.150 4.101 - 4.124: 94.5153% ( 96) 00:10:21.150 4.124 - 4.148: 95.1534% ( 84) 00:10:21.150 4.148 - 4.172: 95.6472% ( 65) 00:10:21.150 4.172 - 4.196: 95.9207% ( 36) 00:10:21.150 4.196 - 4.219: 96.2321% ( 41) 00:10:21.150 4.219 - 4.243: 96.3917% ( 21) 00:10:21.150 4.243 - 4.267: 96.5740% ( 24) 00:10:21.150 4.267 - 4.290: 96.6576% ( 11) 00:10:21.150 4.290 - 4.314: 96.7563% ( 13) 00:10:21.150 4.314 - 4.338: 96.8475% ( 12) 00:10:21.150 4.338 - 4.361: 96.9614% ( 15) 00:10:21.150 4.361 - 4.385: 97.0374% ( 10) 00:10:21.150 4.385 - 4.409: 97.0905% ( 7) 00:10:21.150 4.409 - 4.433: 97.1589% ( 9) 00:10:21.150 4.433 - 4.456: 97.2197% ( 8) 00:10:21.150 4.456 - 4.480: 97.2425% ( 3) 00:10:21.150 4.480 - 4.504: 97.2577% ( 2) 00:10:21.150 4.504 - 4.527: 97.2805% ( 3) 00:10:21.150 4.527 - 4.551: 97.3033% ( 3) 00:10:21.150 4.575 - 4.599: 97.3184% ( 2) 00:10:21.150 4.646 - 4.670: 97.3260% ( 1) 00:10:21.150 4.670 - 4.693: 97.3412% ( 2) 00:10:21.150 4.693 - 4.717: 97.3640% ( 3) 00:10:21.150 4.717 - 4.741: 97.3944% ( 4) 00:10:21.150 4.741 - 4.764: 97.4248% ( 4) 00:10:21.150 4.764 - 4.788: 97.4552% ( 4) 00:10:21.150 4.788 - 4.812: 97.4628% ( 1) 00:10:21.150 4.812 - 4.836: 97.5008% ( 5) 00:10:21.150 4.859 - 4.883: 97.5387% ( 5) 00:10:21.150 4.883 - 4.907: 97.5767% ( 5) 00:10:21.150 4.907 - 4.930: 97.6527% ( 10) 00:10:21.150 4.930 - 4.954: 97.6831% ( 4) 00:10:21.150 4.954 - 4.978: 97.7438% ( 8) 00:10:21.150 4.978 - 5.001: 97.7894% ( 6) 00:10:21.150 5.001 - 5.025: 97.8426% ( 7) 00:10:21.150 5.025 - 5.049: 97.8806% ( 5) 00:10:21.150 5.049 - 5.073: 97.9338% ( 7) 00:10:21.150 5.073 - 5.096: 97.9414% ( 1) 00:10:21.150 5.096 - 5.120: 97.9793% ( 5) 00:10:21.150 5.120 - 5.144: 98.0097% ( 4) 00:10:21.150 5.144 - 5.167: 98.0401% ( 4) 00:10:21.150 5.167 - 5.191: 98.0705% ( 4) 00:10:21.150 5.191 - 5.215: 98.0781% ( 1) 00:10:21.150 5.215 - 5.239: 98.1085% ( 4) 00:10:21.150 5.239 - 5.262: 98.1389% ( 4) 00:10:21.150 5.262 - 5.286: 98.1617% ( 3) 00:10:21.150 5.333 - 5.357: 98.1768% ( 2) 00:10:21.150 5.357 - 5.381: 98.1920% ( 2) 00:10:21.150 5.404 - 5.428: 98.1996% ( 1) 00:10:21.150 5.452 - 5.476: 98.2148% ( 2) 00:10:21.150 5.618 - 5.641: 98.2224% ( 1) 00:10:21.150 5.879 - 5.902: 98.2300% ( 1) 
00:10:21.150 5.950 - 5.973: 98.2376% ( 1) 00:10:21.150 6.068 - 6.116: 98.2452% ( 1) 00:10:21.150 6.116 - 6.163: 98.2528% ( 1) 00:10:21.150 6.305 - 6.353: 98.2604% ( 1) 00:10:21.150 6.353 - 6.400: 98.2680% ( 1) 00:10:21.150 6.400 - 6.447: 98.2756% ( 1) 00:10:21.150 6.447 - 6.495: 98.2832% ( 1) 00:10:21.150 6.637 - 6.684: 98.2908% ( 1) 00:10:21.150 6.684 - 6.732: 98.2984% ( 1) 00:10:21.150 6.874 - 6.921: 98.3060% ( 1) 00:10:21.150 6.921 - 6.969: 98.3136% ( 1) 00:10:21.150 7.064 - 7.111: 98.3288% ( 2) 00:10:21.150 7.111 - 7.159: 98.3364% ( 1) 00:10:21.150 7.159 - 7.206: 98.3440% ( 1) 00:10:21.150 7.206 - 7.253: 98.3592% ( 2) 00:10:21.150 7.253 - 7.301: 98.3668% ( 1) 00:10:21.150 7.301 - 7.348: 98.3744% ( 1) 00:10:21.150 7.396 - 7.443: 98.3895% ( 2) 00:10:21.150 7.490 - 7.538: 98.4047% ( 2) 00:10:21.150 7.538 - 7.585: 98.4123% ( 1) 00:10:21.150 7.727 - 7.775: 98.4199% ( 1) 00:10:21.150 7.822 - 7.870: 98.4275% ( 1) 00:10:21.150 7.870 - 7.917: 98.4427% ( 2) 00:10:21.150 8.154 - 8.201: 98.4503% ( 1) 00:10:21.150 8.249 - 8.296: 98.4655% ( 2) 00:10:21.150 8.296 - 8.344: 98.4883% ( 3) 00:10:21.150 8.344 - 8.391: 98.4959% ( 1) 00:10:21.150 8.391 - 8.439: 98.5187% ( 3) 00:10:21.150 8.439 - 8.486: 98.5263% ( 1) 00:10:21.150 8.486 - 8.533: 98.5415% ( 2) 00:10:21.150 8.533 - 8.581: 98.5719% ( 4) 00:10:21.150 8.628 - 8.676: 98.5871% ( 2) 00:10:21.150 8.676 - 8.723: 98.5947% ( 1) 00:10:21.150 8.723 - 8.770: 98.6022% ( 1) 00:10:21.150 8.770 - 8.818: 98.6098% ( 1) 00:10:21.150 8.818 - 8.865: 98.6250% ( 2) 00:10:21.150 8.865 - 8.913: 98.6402% ( 2) 00:10:21.150 8.960 - 9.007: 98.6630% ( 3) 00:10:21.150 9.007 - 9.055: 98.6706% ( 1) 00:10:21.150 9.055 - 9.102: 98.6782% ( 1) 00:10:21.150 9.150 - 9.197: 98.6858% ( 1) 00:10:21.150 9.244 - 9.292: 98.6934% ( 1) 00:10:21.150 9.387 - 9.434: 98.7010% ( 1) 00:10:21.150 9.481 - 9.529: 98.7086% ( 1) 00:10:21.150 9.529 - 9.576: 98.7162% ( 1) 00:10:21.150 9.576 - 9.624: 98.7238% ( 1) 00:10:21.150 9.671 - 9.719: 98.7314% ( 1) 00:10:21.150 9.766 - 9.813: 98.7390% ( 1) 00:10:21.150 9.861 - 9.908: 98.7770% ( 5) 00:10:21.150 10.098 - 10.145: 98.7846% ( 1) 00:10:21.150 10.193 - 10.240: 98.7922% ( 1) 00:10:21.150 10.335 - 10.382: 98.7998% ( 1) 00:10:21.150 10.430 - 10.477: 98.8074% ( 1) 00:10:21.150 10.477 - 10.524: 98.8149% ( 1) 00:10:21.150 10.572 - 10.619: 98.8225% ( 1) 00:10:21.150 10.619 - 10.667: 98.8453% ( 3) 00:10:21.150 10.809 - 10.856: 98.8529% ( 1) 00:10:21.150 10.951 - 10.999: 98.8681% ( 2) 00:10:21.150 11.046 - 11.093: 98.8757% ( 1) 00:10:21.150 11.093 - 11.141: 98.8833% ( 1) 00:10:21.150 11.141 - 11.188: 98.8909% ( 1) 00:10:21.150 11.236 - 11.283: 98.8985% ( 1) 00:10:21.150 11.378 - 11.425: 98.9061% ( 1) 00:10:21.150 11.615 - 11.662: 98.9137% ( 1) 00:10:21.150 11.662 - 11.710: 98.9289% ( 2) 00:10:21.150 11.947 - 11.994: 98.9365% ( 1) 00:10:21.150 12.136 - 12.231: 98.9441% ( 1) 00:10:21.150 12.231 - 12.326: 98.9517% ( 1) 00:10:21.150 12.705 - 12.800: 98.9593% ( 1) 00:10:21.150 12.800 - 12.895: 98.9669% ( 1) 00:10:21.150 12.895 - 12.990: 98.9745% ( 1) 00:10:21.150 12.990 - 13.084: 98.9821% ( 1) 00:10:21.150 13.084 - 13.179: 98.9897% ( 1) 00:10:21.150 13.274 - 13.369: 99.0049% ( 2) 00:10:21.150 13.559 - 13.653: 99.0125% ( 1) 00:10:21.150 13.653 - 13.748: 99.0201% ( 1) 00:10:21.150 13.748 - 13.843: 99.0277% ( 1) 00:10:21.150 14.033 - 14.127: 99.0352% ( 1) 00:10:21.150 14.222 - 14.317: 99.0428% ( 1) 00:10:21.150 14.696 - 14.791: 99.0504% ( 1) 00:10:21.150 15.076 - 15.170: 99.0656% ( 2) 00:10:21.150 16.593 - 16.687: 99.0732% ( 1) 00:10:21.150 16.972 - 17.067: 99.0808% ( 1) 
00:10:21.150 17.161 - 17.256: 99.0884% ( 1) 00:10:21.150 17.256 - 17.351: 99.0960% ( 1) 00:10:21.150 17.351 - 17.446: 99.1036% ( 1) 00:10:21.150 17.446 - 17.541: 99.1416% ( 5) 00:10:21.150 17.541 - 17.636: 99.1644% ( 3) 00:10:21.150 17.636 - 17.730: 99.2024% ( 5) 00:10:21.150 17.730 - 17.825: 99.2328% ( 4) 00:10:21.150 17.825 - 17.920: 99.3087% ( 10) 00:10:21.150 17.920 - 18.015: 99.3315% ( 3) 00:10:21.150 18.015 - 18.110: 99.3847% ( 7) 00:10:21.150 18.110 - 18.204: 99.4607% ( 10) 00:10:21.150 18.204 - 18.299: 99.5062% ( 6) 00:10:21.150 18.299 - 18.394: 99.5442% ( 5) 00:10:21.150 18.394 - 18.489: 99.5898% ( 6) 00:10:21.151 18.489 - 18.584: 99.6506% ( 8) 00:10:21.151 18.584 - 18.679: 99.7037% ( 7) 00:10:21.151 18.679 - 18.773: 99.7265% ( 3) 00:10:21.151 18.773 - 18.868: 99.7797% ( 7) 00:10:21.151 18.868 - 18.963: 99.8101% ( 4) 00:10:21.151 18.963 - 19.058: 99.8253% ( 2) 00:10:21.151 19.058 - 19.153: 99.8329% ( 1) 00:10:21.151 19.153 - 19.247: 99.8633% ( 4) 00:10:21.151 19.247 - 19.342: 99.8709% ( 1) 00:10:21.151 19.437 - 19.532: 99.8785% ( 1) 00:10:21.151 19.627 - 19.721: 99.8936% ( 2) 00:10:21.151 19.816 - 19.911: 99.9012% ( 1) 00:10:21.151 19.911 - 20.006: 99.9088% ( 1) 00:10:21.151 21.713 - 21.807: 99.9164% ( 1) 00:10:21.151 22.566 - 22.661: 99.9240% ( 1) 00:10:21.151 22.945 - 23.040: 99.9316% ( 1) 00:10:21.151 3980.705 - 4004.978: 99.9772% ( 6) 00:10:21.151 4004.978 - 4029.250: 100.0000% ( 3) 00:10:21.151 00:10:21.151 Complete histogram 00:10:21.151 ================== 00:10:21.151 Range in us Cumulative Count 00:10:21.151 2.074 - 2.086: 0.4330% ( 57) 00:10:21.151 2.086 - 2.098: 13.8560% ( 1767) 00:10:21.151 2.098 - 2.110: 20.6244% ( 891) 00:10:21.151 2.110 - 2.121: 29.1553% ( 1123) 00:10:21.151 2.121 - 2.133: 53.9654% ( 3266) 00:10:21.151 2.133 - 2.145: 58.0143% ( 533) 00:10:21.151 2.145 - 2.157: 60.8478% ( 373) 00:10:21.151 2.157 - 2.169: 66.2109% ( 706) 00:10:21.151 2.169 - 2.181: 67.8441% ( 215) 00:10:21.151 2.181 - 2.193: 72.3108% ( 588) 00:10:21.151 2.193 - 2.204: 80.6138% ( 1093) 00:10:21.151 2.204 - 2.216: 81.8824% ( 167) 00:10:21.151 2.216 - 2.228: 82.9915% ( 146) 00:10:21.151 2.228 - 2.240: 84.9134% ( 253) 00:10:21.151 2.240 - 2.252: 86.9113% ( 263) 00:10:21.151 2.252 - 2.264: 88.9395% ( 267) 00:10:21.151 2.264 - 2.276: 92.3124% ( 444) 00:10:21.151 2.276 - 2.287: 93.4974% ( 156) 00:10:21.151 2.287 - 2.299: 94.0292% ( 70) 00:10:21.151 2.299 - 2.311: 94.3482% ( 42) 00:10:21.151 2.311 - 2.323: 94.9559% ( 80) 00:10:21.151 2.323 - 2.335: 95.3206% ( 48) 00:10:21.151 2.335 - 2.347: 95.4573% ( 18) 00:10:21.151 2.347 - 2.359: 95.5864% ( 17) 00:10:21.151 2.359 - 2.370: 95.6700% ( 11) 00:10:21.151 2.370 - 2.382: 95.7764% ( 14) 00:10:21.151 2.382 - 2.394: 95.9207% ( 19) 00:10:21.151 2.394 - 2.406: 96.1714% ( 33) 00:10:21.151 2.406 - 2.418: 96.5132% ( 45) 00:10:21.151 2.418 - 2.430: 96.8323% ( 42) 00:10:21.151 2.430 - 2.441: 97.0905% ( 34) 00:10:21.151 2.441 - 2.453: 97.3412% ( 33) 00:10:21.151 2.453 - 2.465: 97.5539% ( 28) 00:10:21.151 2.465 - 2.477: 97.7135% ( 21) 00:10:21.151 2.477 - 2.489: 97.8730% ( 21) 00:10:21.151 2.489 - 2.501: 98.0173% ( 19) 00:10:21.151 2.501 - 2.513: 98.1161% ( 13) 00:10:21.151 2.513 - 2.524: 98.1844% ( 9) 00:10:21.151 2.524 - 2.536: 98.2528% ( 9) 00:10:21.151 2.536 - 2.548: 98.3668% ( 15) 00:10:21.151 2.548 - 2.560: 98.3971% ( 4) 00:10:21.151 2.560 - 2.572: 98.4351% ( 5) 00:10:21.151 2.572 - 2.584: 98.4655% ( 4) 00:10:21.151 2.596 - 2.607: 98.4883% ( 3) 00:10:21.151 2.607 - 2.619: 98.4959% ( 1) 00:10:21.151 2.631 - 2.643: 98.5111% ( 2) 00:10:21.151 2.643 - 
2.655: 98.5187% ( 1) 00:10:21.151 2.655 - 2.667: 98.5339% ( 2) 00:10:21.151 2.667 - 2.679: 98.5415% ( 1) 00:10:21.151 2.726 - 2.738: 98.5491% ( 1) 00:10:21.151 2.750 - 2.761: 98.5567% ( 1) 00:10:21.151 2.773 - 2.785: 98.5643% ( 1) 00:10:21.151 3.413 - 3.437: 98.5719% ( 1) 00:10:21.151 3.437 - 3.461: 98.5795% ( 1) 00:10:21.151 3.461 - 3.484: 98.5947% ( 2) 00:10:21.151 3.484 - 3.508: 98.6022% ( 1) 00:10:21.151 3.508 - 3.532: 98.6098% ( 1) 00:10:21.151 3.579 - 3.603: 98.6174% ( 1) 00:10:21.151 3.627 - 3.650: 98.6326% ( 2) 00:10:21.408 [2024-07-16 01:03:37.143658] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:21.409 3.674 - 3.698: 98.6402% ( 1) 00:10:21.409 3.769 - 3.793: 98.6554% ( 2) 00:10:21.409 3.864 - 3.887: 98.6630% ( 1) 00:10:21.409 3.911 - 3.935: 98.6706% ( 1) 00:10:21.409 3.982 - 4.006: 98.6782% ( 1) 00:10:21.409 4.030 - 4.053: 98.6858% ( 1) 00:10:21.409 4.101 - 4.124: 98.7010% ( 2) 00:10:21.409 4.267 - 4.290: 98.7086% ( 1) 00:10:21.409 5.286 - 5.310: 98.7162% ( 1) 00:10:21.409 5.404 - 5.428: 98.7238% ( 1) 00:10:21.409 6.044 - 6.068: 98.7314% ( 1) 00:10:21.409 6.353 - 6.400: 98.7466% ( 2) 00:10:21.409 6.684 - 6.732: 98.7542% ( 1) 00:10:21.409 6.732 - 6.779: 98.7618% ( 1) 00:10:21.409 6.779 - 6.827: 98.7694% ( 1) 00:10:21.409 6.969 - 7.016: 98.7770% ( 1) 00:10:21.409 7.301 - 7.348: 98.7846% ( 1) 00:10:21.409 7.727 - 7.775: 98.7922% ( 1) 00:10:21.409 10.003 - 10.050: 98.7998% ( 1) 00:10:21.409 11.330 - 11.378: 98.8149% ( 2) 00:10:21.409 12.895 - 12.990: 98.8225% ( 1) 00:10:21.409 15.644 - 15.739: 98.8453% ( 3) 00:10:21.409 15.739 - 15.834: 98.8681% ( 3) 00:10:21.409 15.834 - 15.929: 98.8757% ( 1) 00:10:21.409 15.929 - 16.024: 98.8833% ( 1) 00:10:21.409 16.024 - 16.119: 98.9137% ( 4) 00:10:21.409 16.119 - 16.213: 98.9289% ( 2) 00:10:21.409 16.213 - 16.308: 98.9821% ( 7) 00:10:21.409 16.308 - 16.403: 99.0125% ( 4) 00:10:21.409 16.403 - 16.498: 99.0352% ( 3) 00:10:21.409 16.498 - 16.593: 99.0504% ( 2) 00:10:21.409 16.593 - 16.687: 99.0580% ( 1) 00:10:21.409 16.687 - 16.782: 99.0808% ( 3) 00:10:21.409 16.782 - 16.877: 99.1340% ( 7) 00:10:21.409 16.877 - 16.972: 99.1720% ( 5) 00:10:21.409 16.972 - 17.067: 99.2024% ( 4) 00:10:21.409 17.067 - 17.161: 99.2100% ( 1) 00:10:21.409 17.161 - 17.256: 99.2252% ( 2) 00:10:21.409 17.256 - 17.351: 99.2404% ( 2) 00:10:21.409 17.351 - 17.446: 99.2479% ( 1) 00:10:21.409 17.446 - 17.541: 99.2631% ( 2) 00:10:21.409 17.541 - 17.636: 99.2859% ( 3) 00:10:21.409 17.920 - 18.015: 99.2935% ( 1) 00:10:21.409 18.015 - 18.110: 99.3087% ( 2) 00:10:21.409 18.110 - 18.204: 99.3239% ( 2) 00:10:21.409 18.489 - 18.584: 99.3315% ( 1) 00:10:21.409 21.428 - 21.523: 99.3391% ( 1) 00:10:21.409 22.281 - 22.376: 99.3467% ( 1) 00:10:21.409 3009.801 - 3021.938: 99.3543% ( 1) 00:10:21.409 3021.938 - 3034.074: 99.3619% ( 1) 00:10:21.409 3046.210 - 3058.347: 99.3695% ( 1) 00:10:21.409 3980.705 - 4004.978: 99.7037% ( 44) 00:10:21.409 4004.978 - 4029.250: 99.9924% ( 38) 00:10:21.409 4975.881 - 5000.154: 100.0000% ( 1) 00:10:21.409 00:10:21.409 01:03:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:10:21.409 01:03:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:10:21.409 01:03:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:10:21.409 01:03:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- #
local malloc_num=Malloc3 00:10:21.409 01:03:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:21.667 [ 00:10:21.667 { 00:10:21.667 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:21.667 "subtype": "Discovery", 00:10:21.667 "listen_addresses": [], 00:10:21.667 "allow_any_host": true, 00:10:21.667 "hosts": [] 00:10:21.667 }, 00:10:21.667 { 00:10:21.667 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:21.667 "subtype": "NVMe", 00:10:21.667 "listen_addresses": [ 00:10:21.667 { 00:10:21.667 "trtype": "VFIOUSER", 00:10:21.667 "adrfam": "IPv4", 00:10:21.667 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:21.667 "trsvcid": "0" 00:10:21.667 } 00:10:21.667 ], 00:10:21.667 "allow_any_host": true, 00:10:21.667 "hosts": [], 00:10:21.667 "serial_number": "SPDK1", 00:10:21.667 "model_number": "SPDK bdev Controller", 00:10:21.667 "max_namespaces": 32, 00:10:21.667 "min_cntlid": 1, 00:10:21.667 "max_cntlid": 65519, 00:10:21.667 "namespaces": [ 00:10:21.667 { 00:10:21.667 "nsid": 1, 00:10:21.667 "bdev_name": "Malloc1", 00:10:21.667 "name": "Malloc1", 00:10:21.667 "nguid": "7E0C3779AA834F1184159776B8CE81CE", 00:10:21.667 "uuid": "7e0c3779-aa83-4f11-8415-9776b8ce81ce" 00:10:21.667 } 00:10:21.667 ] 00:10:21.667 }, 00:10:21.667 { 00:10:21.667 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:21.667 "subtype": "NVMe", 00:10:21.667 "listen_addresses": [ 00:10:21.667 { 00:10:21.667 "trtype": "VFIOUSER", 00:10:21.667 "adrfam": "IPv4", 00:10:21.667 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:21.667 "trsvcid": "0" 00:10:21.667 } 00:10:21.667 ], 00:10:21.667 "allow_any_host": true, 00:10:21.667 "hosts": [], 00:10:21.667 "serial_number": "SPDK2", 00:10:21.667 "model_number": "SPDK bdev Controller", 00:10:21.667 "max_namespaces": 32, 00:10:21.667 "min_cntlid": 1, 00:10:21.667 "max_cntlid": 65519, 00:10:21.667 "namespaces": [ 00:10:21.667 { 00:10:21.667 "nsid": 1, 00:10:21.667 "bdev_name": "Malloc2", 00:10:21.667 "name": "Malloc2", 00:10:21.667 "nguid": "B0D80C2A99B5402D8CD6696489CD2B9C", 00:10:21.667 "uuid": "b0d80c2a-99b5-402d-8cd6-696489cd2b9c" 00:10:21.667 } 00:10:21.667 ] 00:10:21.667 } 00:10:21.667 ] 00:10:21.667 01:03:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:10:21.667 01:03:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=4096509 00:10:21.667 01:03:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:10:21.667 01:03:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:10:21.667 01:03:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:10:21.667 01:03:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:21.667 01:03:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:10:21.667 01:03:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:10:21.667 01:03:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:10:21.667 01:03:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:10:21.667 EAL: No free 2048 kB hugepages reported on node 1 00:10:21.667 [2024-07-16 01:03:37.606441] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:21.925 Malloc3 00:10:21.925 01:03:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:10:22.183 [2024-07-16 01:03:37.964241] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:22.183 01:03:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:22.183 Asynchronous Event Request test 00:10:22.183 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:22.183 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:22.183 Registering asynchronous event callbacks... 00:10:22.183 Starting namespace attribute notice tests for all controllers... 00:10:22.183 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:10:22.183 aer_cb - Changed Namespace 00:10:22.183 Cleaning up... 00:10:22.442 [ 00:10:22.442 { 00:10:22.442 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:22.442 "subtype": "Discovery", 00:10:22.442 "listen_addresses": [], 00:10:22.442 "allow_any_host": true, 00:10:22.442 "hosts": [] 00:10:22.442 }, 00:10:22.442 { 00:10:22.442 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:22.442 "subtype": "NVMe", 00:10:22.442 "listen_addresses": [ 00:10:22.442 { 00:10:22.442 "trtype": "VFIOUSER", 00:10:22.442 "adrfam": "IPv4", 00:10:22.442 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:22.442 "trsvcid": "0" 00:10:22.442 } 00:10:22.442 ], 00:10:22.442 "allow_any_host": true, 00:10:22.442 "hosts": [], 00:10:22.442 "serial_number": "SPDK1", 00:10:22.442 "model_number": "SPDK bdev Controller", 00:10:22.442 "max_namespaces": 32, 00:10:22.442 "min_cntlid": 1, 00:10:22.442 "max_cntlid": 65519, 00:10:22.442 "namespaces": [ 00:10:22.442 { 00:10:22.442 "nsid": 1, 00:10:22.442 "bdev_name": "Malloc1", 00:10:22.442 "name": "Malloc1", 00:10:22.442 "nguid": "7E0C3779AA834F1184159776B8CE81CE", 00:10:22.442 "uuid": "7e0c3779-aa83-4f11-8415-9776b8ce81ce" 00:10:22.442 }, 00:10:22.442 { 00:10:22.442 "nsid": 2, 00:10:22.442 "bdev_name": "Malloc3", 00:10:22.442 "name": "Malloc3", 00:10:22.442 "nguid": "21C718677DC445CBAF6A1BDEBC15669C", 00:10:22.442 "uuid": "21c71867-7dc4-45cb-af6a-1bdebc15669c" 00:10:22.442 } 00:10:22.442 ] 00:10:22.442 }, 00:10:22.442 { 00:10:22.442 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:22.442 "subtype": "NVMe", 00:10:22.442 "listen_addresses": [ 00:10:22.442 { 00:10:22.442 "trtype": "VFIOUSER", 00:10:22.442 "adrfam": "IPv4", 00:10:22.442 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:22.442 "trsvcid": "0" 00:10:22.442 } 00:10:22.442 ], 00:10:22.442 "allow_any_host": true, 00:10:22.442 "hosts": [], 00:10:22.442 "serial_number": "SPDK2", 00:10:22.442 "model_number": "SPDK bdev Controller", 00:10:22.442 
"max_namespaces": 32, 00:10:22.442 "min_cntlid": 1, 00:10:22.442 "max_cntlid": 65519, 00:10:22.442 "namespaces": [ 00:10:22.442 { 00:10:22.442 "nsid": 1, 00:10:22.442 "bdev_name": "Malloc2", 00:10:22.442 "name": "Malloc2", 00:10:22.442 "nguid": "B0D80C2A99B5402D8CD6696489CD2B9C", 00:10:22.442 "uuid": "b0d80c2a-99b5-402d-8cd6-696489cd2b9c" 00:10:22.442 } 00:10:22.442 ] 00:10:22.442 } 00:10:22.442 ] 00:10:22.442 01:03:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 4096509 00:10:22.442 01:03:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:22.442 01:03:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:10:22.442 01:03:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:10:22.442 01:03:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:10:22.442 [2024-07-16 01:03:38.248052] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:10:22.442 [2024-07-16 01:03:38.248095] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4096596 ] 00:10:22.442 EAL: No free 2048 kB hugepages reported on node 1 00:10:22.442 [2024-07-16 01:03:38.283119] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:10:22.442 [2024-07-16 01:03:38.291024] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:22.442 [2024-07-16 01:03:38.291053] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f96b9c82000 00:10:22.442 [2024-07-16 01:03:38.292029] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:22.442 [2024-07-16 01:03:38.293044] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:22.442 [2024-07-16 01:03:38.294052] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:22.442 [2024-07-16 01:03:38.295066] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:22.443 [2024-07-16 01:03:38.296072] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:22.443 [2024-07-16 01:03:38.297076] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:22.443 [2024-07-16 01:03:38.298083] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:22.443 [2024-07-16 01:03:38.299090] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:22.443 [2024-07-16 01:03:38.300096] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:22.443 [2024-07-16 01:03:38.300118] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f96b9c77000 00:10:22.443 [2024-07-16 01:03:38.301273] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:10:22.443 [2024-07-16 01:03:38.318229] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:10:22.443 [2024-07-16 01:03:38.318280] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:10:22.443 [2024-07-16 01:03:38.320379] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:10:22.443 [2024-07-16 01:03:38.320431] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:10:22.443 [2024-07-16 01:03:38.320522] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:10:22.443 [2024-07-16 01:03:38.320545] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:10:22.443 [2024-07-16 01:03:38.320555] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:10:22.443 [2024-07-16 01:03:38.321389] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:10:22.443 [2024-07-16 01:03:38.321409] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:10:22.443 [2024-07-16 01:03:38.321421] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:10:22.443 [2024-07-16 01:03:38.322398] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:10:22.443 [2024-07-16 01:03:38.322418] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:10:22.443 [2024-07-16 01:03:38.322432] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:10:22.443 [2024-07-16 01:03:38.323402] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:10:22.443 [2024-07-16 01:03:38.323423] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:10:22.443 [2024-07-16 01:03:38.324407] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:10:22.443 [2024-07-16 01:03:38.324428] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:10:22.443 [2024-07-16 01:03:38.324437] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:10:22.443 [2024-07-16 01:03:38.324449] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:10:22.443 [2024-07-16 01:03:38.324558] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:10:22.443 [2024-07-16 01:03:38.324566] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:10:22.443 [2024-07-16 01:03:38.324574] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:10:22.443 [2024-07-16 01:03:38.325419] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:10:22.443 [2024-07-16 01:03:38.326425] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:10:22.443 [2024-07-16 01:03:38.327428] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:10:22.443 [2024-07-16 01:03:38.328427] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:22.443 [2024-07-16 01:03:38.328505] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:10:22.443 [2024-07-16 01:03:38.329441] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:10:22.443 [2024-07-16 01:03:38.329460] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:10:22.443 [2024-07-16 01:03:38.329469] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:10:22.443 [2024-07-16 01:03:38.329492] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:10:22.443 [2024-07-16 01:03:38.329509] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:10:22.443 [2024-07-16 01:03:38.329530] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:22.443 [2024-07-16 01:03:38.329539] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:22.443 [2024-07-16 01:03:38.329558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:22.443 [2024-07-16 01:03:38.339971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:10:22.443 [2024-07-16 01:03:38.339996] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:10:22.443 [2024-07-16 01:03:38.340005] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:10:22.443 [2024-07-16 01:03:38.340013] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:10:22.443 [2024-07-16 01:03:38.340021] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:10:22.443 [2024-07-16 01:03:38.340034] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:10:22.443 [2024-07-16 01:03:38.340043] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:10:22.443 [2024-07-16 01:03:38.340051] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:10:22.443 [2024-07-16 01:03:38.340065] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:10:22.443 [2024-07-16 01:03:38.340085] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:10:22.443 [2024-07-16 01:03:38.347969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:10:22.443 [2024-07-16 01:03:38.347994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:22.443 [2024-07-16 01:03:38.348007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:22.443 [2024-07-16 01:03:38.348020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:22.443 [2024-07-16 01:03:38.348032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:22.443 [2024-07-16 01:03:38.348042] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:10:22.443 [2024-07-16 01:03:38.348058] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:10:22.443 [2024-07-16 01:03:38.348074] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:10:22.443 [2024-07-16 01:03:38.355966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:10:22.443 [2024-07-16 01:03:38.355984] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:10:22.443 [2024-07-16 01:03:38.355994] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:10:22.443 [2024-07-16 01:03:38.356010] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:10:22.443 [2024-07-16 01:03:38.356021] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:10:22.443 [2024-07-16 01:03:38.356035] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:22.443 [2024-07-16 01:03:38.363978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:10:22.443 [2024-07-16 01:03:38.364053] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:10:22.443 [2024-07-16 01:03:38.364071] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:10:22.443 [2024-07-16 01:03:38.364085] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:10:22.443 [2024-07-16 01:03:38.364093] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:10:22.443 [2024-07-16 01:03:38.364103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:10:22.443 [2024-07-16 01:03:38.371968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:10:22.443 [2024-07-16 01:03:38.371992] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:10:22.443 [2024-07-16 01:03:38.372012] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:10:22.443 [2024-07-16 01:03:38.372027] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:10:22.443 [2024-07-16 01:03:38.372040] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:22.443 [2024-07-16 01:03:38.372048] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:22.443 [2024-07-16 01:03:38.372058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:22.443 [2024-07-16 01:03:38.379965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:10:22.443 [2024-07-16 01:03:38.379994] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:10:22.443 [2024-07-16 01:03:38.380010] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:10:22.443 [2024-07-16 01:03:38.380024] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:22.443 [2024-07-16 01:03:38.380032] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:22.443 [2024-07-16 01:03:38.380042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:22.444 [2024-07-16 01:03:38.387969] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:10:22.444 [2024-07-16 01:03:38.387991] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:10:22.444 [2024-07-16 01:03:38.388003] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:10:22.444 [2024-07-16 01:03:38.388018] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:10:22.444 [2024-07-16 01:03:38.388029] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:10:22.444 [2024-07-16 01:03:38.388038] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:10:22.444 [2024-07-16 01:03:38.388046] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:10:22.444 [2024-07-16 01:03:38.388055] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:10:22.444 [2024-07-16 01:03:38.388063] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:10:22.444 [2024-07-16 01:03:38.388071] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:10:22.444 [2024-07-16 01:03:38.388097] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:10:22.444 [2024-07-16 01:03:38.395968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:10:22.444 [2024-07-16 01:03:38.395999] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:10:22.444 [2024-07-16 01:03:38.403969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:10:22.444 [2024-07-16 01:03:38.403994] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:10:22.444 [2024-07-16 01:03:38.411968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:10:22.444 [2024-07-16 01:03:38.411994] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:22.444 [2024-07-16 01:03:38.419969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:10:22.444 [2024-07-16 01:03:38.420001] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:10:22.444 [2024-07-16 01:03:38.420012] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:10:22.444 [2024-07-16 01:03:38.420019] nvme_pcie_common.c:1240:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 
00:10:22.444 [2024-07-16 01:03:38.420025] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:10:22.444 [2024-07-16 01:03:38.420035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:10:22.444 [2024-07-16 01:03:38.420047] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:10:22.444 [2024-07-16 01:03:38.420056] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:10:22.444 [2024-07-16 01:03:38.420065] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:10:22.444 [2024-07-16 01:03:38.420076] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:10:22.444 [2024-07-16 01:03:38.420084] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:22.444 [2024-07-16 01:03:38.420093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:22.444 [2024-07-16 01:03:38.420105] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:10:22.444 [2024-07-16 01:03:38.420114] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:10:22.444 [2024-07-16 01:03:38.420123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:10:22.444 [2024-07-16 01:03:38.427970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:10:22.444 [2024-07-16 01:03:38.427997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:10:22.444 [2024-07-16 01:03:38.428015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:10:22.444 [2024-07-16 01:03:38.428027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:10:22.444 ===================================================== 00:10:22.444 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:22.444 ===================================================== 00:10:22.444 Controller Capabilities/Features 00:10:22.444 ================================ 00:10:22.444 Vendor ID: 4e58 00:10:22.444 Subsystem Vendor ID: 4e58 00:10:22.444 Serial Number: SPDK2 00:10:22.444 Model Number: SPDK bdev Controller 00:10:22.444 Firmware Version: 24.09 00:10:22.444 Recommended Arb Burst: 6 00:10:22.444 IEEE OUI Identifier: 8d 6b 50 00:10:22.444 Multi-path I/O 00:10:22.444 May have multiple subsystem ports: Yes 00:10:22.444 May have multiple controllers: Yes 00:10:22.444 Associated with SR-IOV VF: No 00:10:22.444 Max Data Transfer Size: 131072 00:10:22.444 Max Number of Namespaces: 32 00:10:22.444 Max Number of I/O Queues: 127 00:10:22.444 NVMe Specification Version (VS): 1.3 00:10:22.444 NVMe Specification Version (Identify): 1.3 00:10:22.444 Maximum Queue Entries: 256 00:10:22.444 Contiguous Queues Required: Yes 00:10:22.444 Arbitration Mechanisms 
Supported 00:10:22.444 Weighted Round Robin: Not Supported 00:10:22.444 Vendor Specific: Not Supported 00:10:22.444 Reset Timeout: 15000 ms 00:10:22.444 Doorbell Stride: 4 bytes 00:10:22.444 NVM Subsystem Reset: Not Supported 00:10:22.444 Command Sets Supported 00:10:22.444 NVM Command Set: Supported 00:10:22.444 Boot Partition: Not Supported 00:10:22.444 Memory Page Size Minimum: 4096 bytes 00:10:22.444 Memory Page Size Maximum: 4096 bytes 00:10:22.444 Persistent Memory Region: Not Supported 00:10:22.444 Optional Asynchronous Events Supported 00:10:22.444 Namespace Attribute Notices: Supported 00:10:22.444 Firmware Activation Notices: Not Supported 00:10:22.444 ANA Change Notices: Not Supported 00:10:22.444 PLE Aggregate Log Change Notices: Not Supported 00:10:22.444 LBA Status Info Alert Notices: Not Supported 00:10:22.444 EGE Aggregate Log Change Notices: Not Supported 00:10:22.444 Normal NVM Subsystem Shutdown event: Not Supported 00:10:22.444 Zone Descriptor Change Notices: Not Supported 00:10:22.444 Discovery Log Change Notices: Not Supported 00:10:22.444 Controller Attributes 00:10:22.444 128-bit Host Identifier: Supported 00:10:22.444 Non-Operational Permissive Mode: Not Supported 00:10:22.444 NVM Sets: Not Supported 00:10:22.444 Read Recovery Levels: Not Supported 00:10:22.444 Endurance Groups: Not Supported 00:10:22.444 Predictable Latency Mode: Not Supported 00:10:22.444 Traffic Based Keep ALive: Not Supported 00:10:22.444 Namespace Granularity: Not Supported 00:10:22.444 SQ Associations: Not Supported 00:10:22.444 UUID List: Not Supported 00:10:22.444 Multi-Domain Subsystem: Not Supported 00:10:22.444 Fixed Capacity Management: Not Supported 00:10:22.444 Variable Capacity Management: Not Supported 00:10:22.444 Delete Endurance Group: Not Supported 00:10:22.444 Delete NVM Set: Not Supported 00:10:22.444 Extended LBA Formats Supported: Not Supported 00:10:22.444 Flexible Data Placement Supported: Not Supported 00:10:22.444 00:10:22.444 Controller Memory Buffer Support 00:10:22.444 ================================ 00:10:22.444 Supported: No 00:10:22.444 00:10:22.444 Persistent Memory Region Support 00:10:22.444 ================================ 00:10:22.444 Supported: No 00:10:22.444 00:10:22.444 Admin Command Set Attributes 00:10:22.444 ============================ 00:10:22.444 Security Send/Receive: Not Supported 00:10:22.444 Format NVM: Not Supported 00:10:22.444 Firmware Activate/Download: Not Supported 00:10:22.444 Namespace Management: Not Supported 00:10:22.444 Device Self-Test: Not Supported 00:10:22.444 Directives: Not Supported 00:10:22.444 NVMe-MI: Not Supported 00:10:22.444 Virtualization Management: Not Supported 00:10:22.444 Doorbell Buffer Config: Not Supported 00:10:22.444 Get LBA Status Capability: Not Supported 00:10:22.444 Command & Feature Lockdown Capability: Not Supported 00:10:22.444 Abort Command Limit: 4 00:10:22.444 Async Event Request Limit: 4 00:10:22.444 Number of Firmware Slots: N/A 00:10:22.444 Firmware Slot 1 Read-Only: N/A 00:10:22.444 Firmware Activation Without Reset: N/A 00:10:22.444 Multiple Update Detection Support: N/A 00:10:22.444 Firmware Update Granularity: No Information Provided 00:10:22.444 Per-Namespace SMART Log: No 00:10:22.444 Asymmetric Namespace Access Log Page: Not Supported 00:10:22.444 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:10:22.444 Command Effects Log Page: Supported 00:10:22.444 Get Log Page Extended Data: Supported 00:10:22.444 Telemetry Log Pages: Not Supported 00:10:22.444 Persistent Event Log Pages: Not Supported 
00:10:22.444 Supported Log Pages Log Page: May Support 00:10:22.444 Commands Supported & Effects Log Page: Not Supported 00:10:22.444 Feature Identifiers & Effects Log Page:May Support 00:10:22.444 NVMe-MI Commands & Effects Log Page: May Support 00:10:22.444 Data Area 4 for Telemetry Log: Not Supported 00:10:22.444 Error Log Page Entries Supported: 128 00:10:22.444 Keep Alive: Supported 00:10:22.444 Keep Alive Granularity: 10000 ms 00:10:22.444 00:10:22.444 NVM Command Set Attributes 00:10:22.444 ========================== 00:10:22.444 Submission Queue Entry Size 00:10:22.444 Max: 64 00:10:22.444 Min: 64 00:10:22.444 Completion Queue Entry Size 00:10:22.444 Max: 16 00:10:22.444 Min: 16 00:10:22.444 Number of Namespaces: 32 00:10:22.444 Compare Command: Supported 00:10:22.444 Write Uncorrectable Command: Not Supported 00:10:22.444 Dataset Management Command: Supported 00:10:22.444 Write Zeroes Command: Supported 00:10:22.445 Set Features Save Field: Not Supported 00:10:22.445 Reservations: Not Supported 00:10:22.445 Timestamp: Not Supported 00:10:22.445 Copy: Supported 00:10:22.445 Volatile Write Cache: Present 00:10:22.445 Atomic Write Unit (Normal): 1 00:10:22.445 Atomic Write Unit (PFail): 1 00:10:22.445 Atomic Compare & Write Unit: 1 00:10:22.445 Fused Compare & Write: Supported 00:10:22.445 Scatter-Gather List 00:10:22.445 SGL Command Set: Supported (Dword aligned) 00:10:22.445 SGL Keyed: Not Supported 00:10:22.445 SGL Bit Bucket Descriptor: Not Supported 00:10:22.445 SGL Metadata Pointer: Not Supported 00:10:22.445 Oversized SGL: Not Supported 00:10:22.445 SGL Metadata Address: Not Supported 00:10:22.445 SGL Offset: Not Supported 00:10:22.445 Transport SGL Data Block: Not Supported 00:10:22.445 Replay Protected Memory Block: Not Supported 00:10:22.445 00:10:22.445 Firmware Slot Information 00:10:22.445 ========================= 00:10:22.445 Active slot: 1 00:10:22.445 Slot 1 Firmware Revision: 24.09 00:10:22.445 00:10:22.445 00:10:22.445 Commands Supported and Effects 00:10:22.445 ============================== 00:10:22.445 Admin Commands 00:10:22.445 -------------- 00:10:22.445 Get Log Page (02h): Supported 00:10:22.445 Identify (06h): Supported 00:10:22.445 Abort (08h): Supported 00:10:22.445 Set Features (09h): Supported 00:10:22.445 Get Features (0Ah): Supported 00:10:22.445 Asynchronous Event Request (0Ch): Supported 00:10:22.445 Keep Alive (18h): Supported 00:10:22.445 I/O Commands 00:10:22.445 ------------ 00:10:22.445 Flush (00h): Supported LBA-Change 00:10:22.445 Write (01h): Supported LBA-Change 00:10:22.445 Read (02h): Supported 00:10:22.445 Compare (05h): Supported 00:10:22.445 Write Zeroes (08h): Supported LBA-Change 00:10:22.445 Dataset Management (09h): Supported LBA-Change 00:10:22.445 Copy (19h): Supported LBA-Change 00:10:22.445 00:10:22.445 Error Log 00:10:22.445 ========= 00:10:22.445 00:10:22.445 Arbitration 00:10:22.445 =========== 00:10:22.445 Arbitration Burst: 1 00:10:22.445 00:10:22.445 Power Management 00:10:22.445 ================ 00:10:22.445 Number of Power States: 1 00:10:22.445 Current Power State: Power State #0 00:10:22.445 Power State #0: 00:10:22.445 Max Power: 0.00 W 00:10:22.445 Non-Operational State: Operational 00:10:22.445 Entry Latency: Not Reported 00:10:22.445 Exit Latency: Not Reported 00:10:22.445 Relative Read Throughput: 0 00:10:22.445 Relative Read Latency: 0 00:10:22.445 Relative Write Throughput: 0 00:10:22.445 Relative Write Latency: 0 00:10:22.445 Idle Power: Not Reported 00:10:22.445 Active Power: Not Reported 00:10:22.445 
Non-Operational Permissive Mode: Not Supported 00:10:22.445 00:10:22.445 Health Information 00:10:22.445 ================== 00:10:22.445 Critical Warnings: 00:10:22.445 Available Spare Space: OK 00:10:22.445 Temperature: OK 00:10:22.445 Device Reliability: OK 00:10:22.445 Read Only: No 00:10:22.445 Volatile Memory Backup: OK 00:10:22.445 Current Temperature: 0 Kelvin (-273 Celsius) 00:10:22.445 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:10:22.445 Available Spare: 0% 00:10:22.445 [2024-07-16 01:03:38.428148] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:10:22.707 [2024-07-16 01:03:38.435966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:10:22.707 [2024-07-16 01:03:38.436018] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:10:22.707 [2024-07-16 01:03:38.436037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.707 [2024-07-16 01:03:38.436052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.707 [2024-07-16 01:03:38.436063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.707 [2024-07-16 01:03:38.436074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.707 [2024-07-16 01:03:38.436159] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:10:22.707 [2024-07-16 01:03:38.436181] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:10:22.707 [2024-07-16 01:03:38.437163] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:22.707 [2024-07-16 01:03:38.437257] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:10:22.707 [2024-07-16 01:03:38.437288] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:10:22.707 [2024-07-16 01:03:38.438177] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:10:22.707 [2024-07-16 01:03:38.438202] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:10:22.707 [2024-07-16 01:03:38.438270] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:10:22.707 [2024-07-16 01:03:38.439484] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:10:22.707 Available Spare Threshold: 0% 00:10:22.707 Life Percentage Used: 0% 00:10:22.707 Data Units Read: 0 00:10:22.707 Data Units Written: 0 00:10:22.707 Host Read Commands: 0 00:10:22.707 Host Write Commands: 0 00:10:22.707 Controller Busy Time: 0 minutes 00:10:22.707 Power Cycles: 0 00:10:22.707 Power On Hours: 0 hours 00:10:22.707 Unsafe Shutdowns: 0 00:10:22.707 Unrecoverable Media
Errors: 0 00:10:22.707 Lifetime Error Log Entries: 0 00:10:22.707 Warning Temperature Time: 0 minutes 00:10:22.707 Critical Temperature Time: 0 minutes 00:10:22.707 00:10:22.707 Number of Queues 00:10:22.707 ================ 00:10:22.707 Number of I/O Submission Queues: 127 00:10:22.707 Number of I/O Completion Queues: 127 00:10:22.707 00:10:22.707 Active Namespaces 00:10:22.707 ================= 00:10:22.707 Namespace ID:1 00:10:22.707 Error Recovery Timeout: Unlimited 00:10:22.707 Command Set Identifier: NVM (00h) 00:10:22.707 Deallocate: Supported 00:10:22.707 Deallocated/Unwritten Error: Not Supported 00:10:22.707 Deallocated Read Value: Unknown 00:10:22.707 Deallocate in Write Zeroes: Not Supported 00:10:22.707 Deallocated Guard Field: 0xFFFF 00:10:22.707 Flush: Supported 00:10:22.707 Reservation: Supported 00:10:22.707 Namespace Sharing Capabilities: Multiple Controllers 00:10:22.707 Size (in LBAs): 131072 (0GiB) 00:10:22.708 Capacity (in LBAs): 131072 (0GiB) 00:10:22.708 Utilization (in LBAs): 131072 (0GiB) 00:10:22.708 NGUID: B0D80C2A99B5402D8CD6696489CD2B9C 00:10:22.708 UUID: b0d80c2a-99b5-402d-8cd6-696489cd2b9c 00:10:22.708 Thin Provisioning: Not Supported 00:10:22.708 Per-NS Atomic Units: Yes 00:10:22.708 Atomic Boundary Size (Normal): 0 00:10:22.708 Atomic Boundary Size (PFail): 0 00:10:22.708 Atomic Boundary Offset: 0 00:10:22.708 Maximum Single Source Range Length: 65535 00:10:22.708 Maximum Copy Length: 65535 00:10:22.708 Maximum Source Range Count: 1 00:10:22.708 NGUID/EUI64 Never Reused: No 00:10:22.708 Namespace Write Protected: No 00:10:22.708 Number of LBA Formats: 1 00:10:22.708 Current LBA Format: LBA Format #00 00:10:22.708 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:22.708 00:10:22.708 01:03:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:10:22.708 EAL: No free 2048 kB hugepages reported on node 1 00:10:22.708 [2024-07-16 01:03:38.676783] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:27.968 Initializing NVMe Controllers 00:10:27.968 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:27.968 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:10:27.968 Initialization complete. Launching workers. 
00:10:27.968 ======================================================== 00:10:27.968 Latency(us) 00:10:27.968 Device Information : IOPS MiB/s Average min max 00:10:27.968 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34352.74 134.19 3725.30 1147.82 10641.30 00:10:27.968 ======================================================== 00:10:27.968 Total : 34352.74 134.19 3725.30 1147.82 10641.30 00:10:27.968 00:10:27.968 [2024-07-16 01:03:43.778294] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:27.968 01:03:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:10:27.968 EAL: No free 2048 kB hugepages reported on node 1 00:10:28.225 [2024-07-16 01:03:44.021999] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:33.483 Initializing NVMe Controllers 00:10:33.483 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:33.483 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:10:33.483 Initialization complete. Launching workers. 00:10:33.483 ======================================================== 00:10:33.483 Latency(us) 00:10:33.483 Device Information : IOPS MiB/s Average min max 00:10:33.483 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 32397.50 126.55 3950.16 1198.70 7378.35 00:10:33.483 ======================================================== 00:10:33.483 Total : 32397.50 126.55 3950.16 1198.70 7378.35 00:10:33.483 00:10:33.483 [2024-07-16 01:03:49.043144] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:33.483 01:03:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:10:33.483 EAL: No free 2048 kB hugepages reported on node 1 00:10:33.483 [2024-07-16 01:03:49.252958] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:38.748 [2024-07-16 01:03:54.395088] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:38.748 Initializing NVMe Controllers 00:10:38.748 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:38.748 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:38.748 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:10:38.748 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:10:38.748 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:10:38.748 Initialization complete. Launching workers. 
00:10:38.748 Starting thread on core 2 00:10:38.748 Starting thread on core 3 00:10:38.748 Starting thread on core 1 00:10:38.748 01:03:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:10:38.748 EAL: No free 2048 kB hugepages reported on node 1 00:10:38.748 [2024-07-16 01:03:54.701466] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:42.028 [2024-07-16 01:03:57.774017] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:42.028 Initializing NVMe Controllers 00:10:42.028 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:42.028 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:42.028 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:10:42.028 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:10:42.028 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:10:42.028 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:10:42.028 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:10:42.028 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:10:42.028 Initialization complete. Launching workers. 00:10:42.028 Starting thread on core 1 with urgent priority queue 00:10:42.028 Starting thread on core 2 with urgent priority queue 00:10:42.028 Starting thread on core 3 with urgent priority queue 00:10:42.028 Starting thread on core 0 with urgent priority queue 00:10:42.028 SPDK bdev Controller (SPDK2 ) core 0: 3587.33 IO/s 27.88 secs/100000 ios 00:10:42.028 SPDK bdev Controller (SPDK2 ) core 1: 4438.67 IO/s 22.53 secs/100000 ios 00:10:42.028 SPDK bdev Controller (SPDK2 ) core 2: 3750.33 IO/s 26.66 secs/100000 ios 00:10:42.028 SPDK bdev Controller (SPDK2 ) core 3: 4773.67 IO/s 20.95 secs/100000 ios 00:10:42.028 ======================================================== 00:10:42.028 00:10:42.028 01:03:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:10:42.028 EAL: No free 2048 kB hugepages reported on node 1 00:10:42.284 [2024-07-16 01:03:58.075494] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:42.284 Initializing NVMe Controllers 00:10:42.284 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:42.284 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:42.284 Namespace ID: 1 size: 0GB 00:10:42.284 Initialization complete. 00:10:42.284 INFO: using host memory buffer for IO 00:10:42.284 Hello world! 
00:10:42.284 [2024-07-16 01:03:58.085561] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:42.284 01:03:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:10:42.284 EAL: No free 2048 kB hugepages reported on node 1 00:10:42.539 [2024-07-16 01:03:58.384327] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:43.907 Initializing NVMe Controllers 00:10:43.907 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:43.907 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:43.907 Initialization complete. Launching workers. 00:10:43.907 submit (in ns) avg, min, max = 7589.5, 3505.6, 4025447.8 00:10:43.907 complete (in ns) avg, min, max = 25748.1, 2055.6, 4016385.6 00:10:43.907 00:10:43.907 Submit histogram 00:10:43.907 ================ 00:10:43.907 Range in us Cumulative Count 00:10:43.907 3.484 - 3.508: 0.0382% ( 5) 00:10:43.907 3.508 - 3.532: 1.0536% ( 133) 00:10:43.907 3.532 - 3.556: 3.1150% ( 270) 00:10:43.907 3.556 - 3.579: 7.8256% ( 617) 00:10:43.907 3.579 - 3.603: 14.8725% ( 923) 00:10:43.907 3.603 - 3.627: 22.3775% ( 983) 00:10:43.907 3.627 - 3.650: 30.9971% ( 1129) 00:10:43.907 3.650 - 3.674: 38.8304% ( 1026) 00:10:43.907 3.674 - 3.698: 45.9078% ( 927) 00:10:43.907 3.698 - 3.721: 52.6264% ( 880) 00:10:43.907 3.721 - 3.745: 56.6728% ( 530) 00:10:43.907 3.745 - 3.769: 60.5741% ( 511) 00:10:43.907 3.769 - 3.793: 63.9716% ( 445) 00:10:43.907 3.793 - 3.816: 67.6439% ( 481) 00:10:43.907 3.816 - 3.840: 71.4537% ( 499) 00:10:43.907 3.840 - 3.864: 75.6604% ( 551) 00:10:43.907 3.864 - 3.887: 79.2945% ( 476) 00:10:43.907 3.887 - 3.911: 82.6844% ( 444) 00:10:43.907 3.911 - 3.935: 85.5092% ( 370) 00:10:43.907 3.935 - 3.959: 87.5706% ( 270) 00:10:43.907 3.959 - 3.982: 89.2808% ( 224) 00:10:43.907 3.982 - 4.006: 90.8230% ( 202) 00:10:43.907 4.006 - 4.030: 92.0751% ( 164) 00:10:43.907 4.030 - 4.053: 93.2127% ( 149) 00:10:43.907 4.053 - 4.077: 94.1518% ( 123) 00:10:43.907 4.077 - 4.101: 94.9229% ( 101) 00:10:43.907 4.101 - 4.124: 95.5795% ( 86) 00:10:43.907 4.124 - 4.148: 96.0452% ( 61) 00:10:43.907 4.148 - 4.172: 96.2971% ( 33) 00:10:43.907 4.172 - 4.196: 96.5567% ( 34) 00:10:43.907 4.196 - 4.219: 96.7552% ( 26) 00:10:43.907 4.219 - 4.243: 96.9385% ( 24) 00:10:43.907 4.243 - 4.267: 97.0759% ( 18) 00:10:43.907 4.267 - 4.290: 97.1675% ( 12) 00:10:43.907 4.290 - 4.314: 97.2362% ( 9) 00:10:43.907 4.314 - 4.338: 97.2897% ( 7) 00:10:43.907 4.338 - 4.361: 97.3507% ( 8) 00:10:43.907 4.361 - 4.385: 97.4042% ( 7) 00:10:43.907 4.385 - 4.409: 97.4500% ( 6) 00:10:43.907 4.409 - 4.433: 97.4805% ( 4) 00:10:43.907 4.433 - 4.456: 97.4882% ( 1) 00:10:43.907 4.456 - 4.480: 97.5111% ( 3) 00:10:43.907 4.504 - 4.527: 97.5187% ( 1) 00:10:43.907 4.527 - 4.551: 97.5492% ( 4) 00:10:43.907 4.551 - 4.575: 97.5645% ( 2) 00:10:43.907 4.575 - 4.599: 97.5721% ( 1) 00:10:43.907 4.599 - 4.622: 97.5798% ( 1) 00:10:43.907 4.670 - 4.693: 97.5951% ( 2) 00:10:43.907 4.717 - 4.741: 97.6180% ( 3) 00:10:43.907 4.741 - 4.764: 97.6256% ( 1) 00:10:43.907 4.764 - 4.788: 97.6561% ( 4) 00:10:43.907 4.788 - 4.812: 97.7325% ( 10) 00:10:43.907 4.812 - 4.836: 97.7554% ( 3) 00:10:43.907 4.836 - 4.859: 97.8165% ( 8) 00:10:43.907 4.859 - 4.883: 97.8623% ( 6) 00:10:43.907 4.883 - 4.907: 97.9157% ( 7) 00:10:43.907 4.907 - 
4.930: 97.9692% ( 7) 00:10:43.907 4.930 - 4.954: 98.0150% ( 6) 00:10:43.907 4.954 - 4.978: 98.0379% ( 3) 00:10:43.907 4.978 - 5.001: 98.0760% ( 5) 00:10:43.907 5.001 - 5.025: 98.1066% ( 4) 00:10:43.907 5.025 - 5.049: 98.1219% ( 2) 00:10:43.907 5.049 - 5.073: 98.1448% ( 3) 00:10:43.907 5.073 - 5.096: 98.1677% ( 3) 00:10:43.907 5.096 - 5.120: 98.1982% ( 4) 00:10:43.907 5.120 - 5.144: 98.2440% ( 6) 00:10:43.907 5.144 - 5.167: 98.2593% ( 2) 00:10:43.907 5.167 - 5.191: 98.2745% ( 2) 00:10:43.907 5.191 - 5.215: 98.2822% ( 1) 00:10:43.907 5.215 - 5.239: 98.2898% ( 1) 00:10:43.907 5.239 - 5.262: 98.3204% ( 4) 00:10:43.907 5.262 - 5.286: 98.3433% ( 3) 00:10:43.907 5.286 - 5.310: 98.3509% ( 1) 00:10:43.907 5.310 - 5.333: 98.3585% ( 1) 00:10:43.907 5.333 - 5.357: 98.3662% ( 1) 00:10:43.907 5.357 - 5.381: 98.3891% ( 3) 00:10:43.907 5.381 - 5.404: 98.4043% ( 2) 00:10:43.907 5.404 - 5.428: 98.4120% ( 1) 00:10:43.907 5.452 - 5.476: 98.4272% ( 2) 00:10:43.907 5.476 - 5.499: 98.4349% ( 1) 00:10:43.907 5.523 - 5.547: 98.4425% ( 1) 00:10:43.907 5.547 - 5.570: 98.4501% ( 1) 00:10:43.907 5.641 - 5.665: 98.4578% ( 1) 00:10:43.907 5.760 - 5.784: 98.4654% ( 1) 00:10:43.907 5.926 - 5.950: 98.4730% ( 1) 00:10:43.907 5.950 - 5.973: 98.4807% ( 1) 00:10:43.907 6.044 - 6.068: 98.4883% ( 1) 00:10:43.907 6.210 - 6.258: 98.4960% ( 1) 00:10:43.907 6.353 - 6.400: 98.5036% ( 1) 00:10:43.907 6.542 - 6.590: 98.5112% ( 1) 00:10:43.907 6.921 - 6.969: 98.5189% ( 1) 00:10:43.907 6.969 - 7.016: 98.5265% ( 1) 00:10:43.907 7.064 - 7.111: 98.5418% ( 2) 00:10:43.907 7.111 - 7.159: 98.5494% ( 1) 00:10:43.907 7.253 - 7.301: 98.5647% ( 2) 00:10:43.907 7.348 - 7.396: 98.5723% ( 1) 00:10:43.907 7.443 - 7.490: 98.5799% ( 1) 00:10:43.907 7.538 - 7.585: 98.5952% ( 2) 00:10:43.907 7.727 - 7.775: 98.6028% ( 1) 00:10:43.907 7.775 - 7.822: 98.6105% ( 1) 00:10:43.907 7.964 - 8.012: 98.6181% ( 1) 00:10:43.907 8.012 - 8.059: 98.6257% ( 1) 00:10:43.907 8.107 - 8.154: 98.6334% ( 1) 00:10:43.907 8.201 - 8.249: 98.6410% ( 1) 00:10:43.907 8.296 - 8.344: 98.6716% ( 4) 00:10:43.907 8.486 - 8.533: 98.6792% ( 1) 00:10:43.907 8.676 - 8.723: 98.6945% ( 2) 00:10:43.907 8.723 - 8.770: 98.7021% ( 1) 00:10:43.907 8.818 - 8.865: 98.7250% ( 3) 00:10:43.907 8.865 - 8.913: 98.7326% ( 1) 00:10:43.907 9.007 - 9.055: 98.7403% ( 1) 00:10:43.907 9.055 - 9.102: 98.7479% ( 1) 00:10:43.907 9.102 - 9.150: 98.7555% ( 1) 00:10:43.907 9.150 - 9.197: 98.7708% ( 2) 00:10:43.907 9.197 - 9.244: 98.7784% ( 1) 00:10:43.907 9.292 - 9.339: 98.8013% ( 3) 00:10:43.907 9.387 - 9.434: 98.8166% ( 2) 00:10:43.907 9.624 - 9.671: 98.8395% ( 3) 00:10:43.907 9.671 - 9.719: 98.8472% ( 1) 00:10:43.907 9.719 - 9.766: 98.8548% ( 1) 00:10:43.907 9.766 - 9.813: 98.8701% ( 2) 00:10:43.907 9.813 - 9.861: 98.8777% ( 1) 00:10:43.907 9.861 - 9.908: 98.8930% ( 2) 00:10:43.907 10.050 - 10.098: 98.9082% ( 2) 00:10:43.907 10.240 - 10.287: 98.9159% ( 1) 00:10:43.907 10.809 - 10.856: 98.9311% ( 2) 00:10:43.907 10.856 - 10.904: 98.9388% ( 1) 00:10:43.907 10.951 - 10.999: 98.9464% ( 1) 00:10:43.907 10.999 - 11.046: 98.9617% ( 2) 00:10:43.907 11.093 - 11.141: 98.9693% ( 1) 00:10:43.907 11.236 - 11.283: 98.9846% ( 2) 00:10:43.907 11.378 - 11.425: 98.9922% ( 1) 00:10:43.907 11.520 - 11.567: 99.0075% ( 2) 00:10:43.907 11.567 - 11.615: 99.0151% ( 1) 00:10:43.907 11.615 - 11.662: 99.0228% ( 1) 00:10:43.907 11.804 - 11.852: 99.0304% ( 1) 00:10:43.907 12.421 - 12.516: 99.0380% ( 1) 00:10:43.907 13.084 - 13.179: 99.0457% ( 1) 00:10:43.907 13.274 - 13.369: 99.0533% ( 1) 00:10:43.907 13.369 - 13.464: 99.0609% ( 1) 00:10:43.907 
13.653 - 13.748: 99.0686% ( 1) 00:10:43.907 13.748 - 13.843: 99.0838% ( 2) 00:10:43.907 14.033 - 14.127: 99.0991% ( 2) 00:10:43.907 14.791 - 14.886: 99.1144% ( 2) 00:10:43.907 15.455 - 15.550: 99.1220% ( 1) 00:10:43.907 15.834 - 15.929: 99.1296% ( 1) 00:10:43.907 17.256 - 17.351: 99.1525% ( 3) 00:10:43.907 17.351 - 17.446: 99.1754% ( 3) 00:10:43.907 17.446 - 17.541: 99.1831% ( 1) 00:10:43.907 17.541 - 17.636: 99.1907% ( 1) 00:10:43.907 17.636 - 17.730: 99.2289% ( 5) 00:10:43.907 17.730 - 17.825: 99.2518% ( 3) 00:10:43.907 17.825 - 17.920: 99.3052% ( 7) 00:10:43.907 17.920 - 18.015: 99.3663% ( 8) 00:10:43.907 18.015 - 18.110: 99.3969% ( 4) 00:10:43.907 18.110 - 18.204: 99.4656% ( 9) 00:10:43.907 18.204 - 18.299: 99.5648% ( 13) 00:10:43.907 18.299 - 18.394: 99.6030% ( 5) 00:10:43.907 18.394 - 18.489: 99.6259% ( 3) 00:10:43.907 18.489 - 18.584: 99.6641% ( 5) 00:10:43.907 18.584 - 18.679: 99.7099% ( 6) 00:10:43.907 18.679 - 18.773: 99.7175% ( 1) 00:10:43.907 18.773 - 18.868: 99.7328% ( 2) 00:10:43.907 18.868 - 18.963: 99.7786% ( 6) 00:10:43.907 18.963 - 19.058: 99.8015% ( 3) 00:10:43.907 19.058 - 19.153: 99.8168% ( 2) 00:10:43.907 19.153 - 19.247: 99.8320% ( 2) 00:10:43.907 19.247 - 19.342: 99.8397% ( 1) 00:10:43.907 19.342 - 19.437: 99.8473% ( 1) 00:10:43.908 19.532 - 19.627: 99.8549% ( 1) 00:10:43.908 20.859 - 20.954: 99.8626% ( 1) 00:10:43.908 20.954 - 21.049: 99.8702% ( 1) 00:10:43.908 21.049 - 21.144: 99.8778% ( 1) 00:10:43.908 23.040 - 23.135: 99.8855% ( 1) 00:10:43.908 23.609 - 23.704: 99.9007% ( 2) 00:10:43.908 87.230 - 87.609: 99.9084% ( 1) 00:10:43.908 3980.705 - 4004.978: 99.9695% ( 8) 00:10:43.908 4004.978 - 4029.250: 100.0000% ( 4) 00:10:43.908 00:10:43.908 Complete histogram 00:10:43.908 ================== 00:10:43.908 Range in us Cumulative Count 00:10:43.908 2.050 - 2.062: 0.3512% ( 46) 00:10:43.908 2.062 - 2.074: 11.5361% ( 1465) 00:10:43.908 2.074 - 2.086: 20.2092% ( 1136) 00:10:43.908 2.086 - 2.098: 26.7980% ( 863) 00:10:43.908 2.098 - 2.110: 50.6642% ( 3126) 00:10:43.908 2.110 - 2.121: 58.2226% ( 990) 00:10:43.908 2.121 - 2.133: 60.6658% ( 320) 00:10:43.908 2.133 - 2.145: 65.4069% ( 621) 00:10:43.908 2.145 - 2.157: 67.6821% ( 298) 00:10:43.908 2.157 - 2.169: 71.5911% ( 512) 00:10:43.908 2.169 - 2.181: 78.2868% ( 877) 00:10:43.908 2.181 - 2.193: 80.7910% ( 328) 00:10:43.908 2.193 - 2.204: 81.7377% ( 124) 00:10:43.908 2.204 - 2.216: 83.7456% ( 263) 00:10:43.908 2.216 - 2.228: 86.0131% ( 297) 00:10:43.908 2.228 - 2.240: 88.0134% ( 262) 00:10:43.908 2.240 - 2.252: 91.6094% ( 471) 00:10:43.908 2.252 - 2.264: 93.3807% ( 232) 00:10:43.908 2.264 - 2.276: 93.8540% ( 62) 00:10:43.908 2.276 - 2.287: 94.2816% ( 56) 00:10:43.908 2.287 - 2.299: 94.8694% ( 77) 00:10:43.908 2.299 - 2.311: 95.1825% ( 41) 00:10:43.908 2.311 - 2.323: 95.3581% ( 23) 00:10:43.908 2.323 - 2.335: 95.5489% ( 25) 00:10:43.908 2.335 - 2.347: 95.6177% ( 9) 00:10:43.908 2.347 - 2.359: 95.7016% ( 11) 00:10:43.908 2.359 - 2.370: 95.7856% ( 11) 00:10:43.908 2.370 - 2.382: 96.0070% ( 29) 00:10:43.908 2.382 - 2.394: 96.2132% ( 27) 00:10:43.908 2.394 - 2.406: 96.5033% ( 38) 00:10:43.908 2.406 - 2.418: 96.7094% ( 27) 00:10:43.908 2.418 - 2.430: 96.9919% ( 37) 00:10:43.908 2.430 - 2.441: 97.3126% ( 42) 00:10:43.908 2.441 - 2.453: 97.4882% ( 23) 00:10:43.908 2.453 - 2.465: 97.6180% ( 17) 00:10:43.908 2.465 - 2.477: 97.7783% ( 21) 00:10:43.908 2.477 - 2.489: 97.9310% ( 20) 00:10:43.908 2.489 - 2.501: 98.0531% ( 16) 00:10:43.908 2.501 - 2.513: 98.1142% ( 8) 00:10:43.908 2.513 - 2.524: 98.2211% ( 14) 00:10:43.908 2.524 - 2.536: 
98.2974% ( 10) 00:10:43.908 2.536 - 2.548: 98.3585% ( 8) 00:10:43.908 2.548 - 2.560: 98.4196% ( 8) 00:10:43.908 2.560 - 2.572: 98.4578% ( 5) 00:10:43.908 2.572 - 2.584: 98.4730% ( 2) 00:10:43.908 2.584 - 2.596: 98.4960% ( 3) 00:10:43.908 2.596 - 2.607: 98.5112% ( 2) 00:10:43.908 2.619 - 2.631: 98.5341% ( 3) 00:10:43.908 2.631 - 2.643: 98.5418% ( 1) 00:10:43.908 2.643 - 2.655: 98.5494% ( 1) 00:10:43.908 2.679 - 2.690: 98.5570% ( 1) 00:10:43.908 2.714 - 2.726: 98.5647% ( 1) 00:10:43.908 2.726 - 2.738: 98.5799% ( 2) 00:10:43.908 2.750 - 2.761: 98.5876% ( 1) 00:10:43.908 2.761 - 2.773: 98.5952% ( 1) 00:10:43.908 2.773 - 2.785: 98.6028% ( 1) 00:10:43.908 2.939 - 2.951: 98.6105% ( 1) 00:10:43.908 3.437 - 3.461: 98.6181% ( 1) 00:10:43.908 3.484 - 3.508: 98.6486% ( 4) 00:10:43.908 3.532 - 3.556: 98.6716% ( 3) 00:10:43.908 3.579 - 3.603: 98.6945% ( 3) 00:10:43.908 3.603 - 3.627: 98.7021% ( 1) 00:10:43.908 3.650 - 3.674: 98.7174% ( 2) 00:10:43.908 3.698 - 3.721: 98.7250% ( 1) 00:10:43.908 3.721 - 3.745: 98.7326% ( 1) 00:10:43.908 3.745 - 3.769: 98.7403% ( 1) 00:10:43.908 3.769 - 3.793: 98.7632% ( 3) 00:10:43.908 3.840 - 3.864: 98.7708% ( 1) 00:10:43.908 3.959 - 3.982: 98.7784% ( 1) 00:10:43.908 4.006 - 4.030: 98.7861% ( 1) 00:10:43.908 4.030 - 4.053: 98.8013% ( 2) 00:10:43.908 4.077 - 4.101: 98.8166% ( 2) 00:10:43.908 4.409 - 4.433: 98.8242% ( 1) 00:10:43.908 [2024-07-16 01:03:59.478753] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:43.908 5.618 - 5.641: 98.8395% ( 2) 00:10:43.908 5.807 - 5.831: 98.8472% ( 1) 00:10:43.908 6.400 - 6.447: 98.8548% ( 1) 00:10:43.908 6.495 - 6.542: 98.8624% ( 1) 00:10:43.908 6.542 - 6.590: 98.8701% ( 1) 00:10:43.908 6.590 - 6.637: 98.8777% ( 1) 00:10:43.908 6.637 - 6.684: 98.8853% ( 1) 00:10:43.908 6.732 - 6.779: 98.8930% ( 1) 00:10:43.908 6.874 - 6.921: 98.9082% ( 2) 00:10:43.908 7.396 - 7.443: 98.9159% ( 1) 00:10:43.908 8.059 - 8.107: 98.9235% ( 1) 00:10:43.908 8.201 - 8.249: 98.9311% ( 1) 00:10:43.908 8.865 - 8.913: 98.9388% ( 1) 00:10:43.908 9.244 - 9.292: 98.9464% ( 1) 00:10:43.908 10.003 - 10.050: 98.9540% ( 1) 00:10:43.908 10.856 - 10.904: 98.9617% ( 1) 00:10:43.908 15.455 - 15.550: 98.9769% ( 2) 00:10:43.908 15.550 - 15.644: 98.9846% ( 1) 00:10:43.908 15.644 - 15.739: 98.9922% ( 1) 00:10:43.908 15.739 - 15.834: 99.0075% ( 2) 00:10:43.908 15.834 - 15.929: 99.0228% ( 2) 00:10:43.908 15.929 - 16.024: 99.0304% ( 1) 00:10:43.908 16.024 - 16.119: 99.0457% ( 2) 00:10:43.908 16.119 - 16.213: 99.0762% ( 4) 00:10:43.908 16.213 - 16.308: 99.0991% ( 3) 00:10:43.908 16.308 - 16.403: 99.1296% ( 4) 00:10:43.908 16.403 - 16.498: 99.1754% ( 6) 00:10:43.908 16.498 - 16.593: 99.2289% ( 7) 00:10:43.908 16.593 - 16.687: 99.2594% ( 4) 00:10:43.908 16.687 - 16.782: 99.2671% ( 1) 00:10:43.908 16.782 - 16.877: 99.3052% ( 5) 00:10:43.908 16.877 - 16.972: 99.3281% ( 3) 00:10:43.908 16.972 - 17.067: 99.3434% ( 2) 00:10:43.908 17.067 - 17.161: 99.3510% ( 1) 00:10:43.908 17.446 - 17.541: 99.3587% ( 1) 00:10:43.908 17.730 - 17.825: 99.3663% ( 1) 00:10:43.908 18.015 - 18.110: 99.3740% ( 1) 00:10:43.908 18.394 - 18.489: 99.3892% ( 2) 00:10:43.908 18.584 - 18.679: 99.4045% ( 2) 00:10:43.908 26.169 - 26.359: 99.4121% ( 1) 00:10:43.908 3980.705 - 4004.978: 99.8320% ( 55) 00:10:43.908 4004.978 - 4029.250: 100.0000% ( 22) 00:10:43.908 00:10:43.908 01:03:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:10:43.908 01:03:59
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:10:43.908 01:03:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:10:43.908 01:03:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:10:43.908 01:03:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:43.908 [ 00:10:43.908 { 00:10:43.908 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:43.908 "subtype": "Discovery", 00:10:43.908 "listen_addresses": [], 00:10:43.908 "allow_any_host": true, 00:10:43.908 "hosts": [] 00:10:43.908 }, 00:10:43.908 { 00:10:43.908 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:43.908 "subtype": "NVMe", 00:10:43.908 "listen_addresses": [ 00:10:43.908 { 00:10:43.908 "trtype": "VFIOUSER", 00:10:43.908 "adrfam": "IPv4", 00:10:43.908 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:43.908 "trsvcid": "0" 00:10:43.908 } 00:10:43.908 ], 00:10:43.908 "allow_any_host": true, 00:10:43.908 "hosts": [], 00:10:43.908 "serial_number": "SPDK1", 00:10:43.908 "model_number": "SPDK bdev Controller", 00:10:43.908 "max_namespaces": 32, 00:10:43.908 "min_cntlid": 1, 00:10:43.908 "max_cntlid": 65519, 00:10:43.908 "namespaces": [ 00:10:43.908 { 00:10:43.908 "nsid": 1, 00:10:43.908 "bdev_name": "Malloc1", 00:10:43.908 "name": "Malloc1", 00:10:43.908 "nguid": "7E0C3779AA834F1184159776B8CE81CE", 00:10:43.908 "uuid": "7e0c3779-aa83-4f11-8415-9776b8ce81ce" 00:10:43.908 }, 00:10:43.908 { 00:10:43.908 "nsid": 2, 00:10:43.908 "bdev_name": "Malloc3", 00:10:43.908 "name": "Malloc3", 00:10:43.908 "nguid": "21C718677DC445CBAF6A1BDEBC15669C", 00:10:43.908 "uuid": "21c71867-7dc4-45cb-af6a-1bdebc15669c" 00:10:43.908 } 00:10:43.908 ] 00:10:43.908 }, 00:10:43.908 { 00:10:43.908 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:43.908 "subtype": "NVMe", 00:10:43.908 "listen_addresses": [ 00:10:43.908 { 00:10:43.908 "trtype": "VFIOUSER", 00:10:43.908 "adrfam": "IPv4", 00:10:43.908 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:43.908 "trsvcid": "0" 00:10:43.908 } 00:10:43.908 ], 00:10:43.908 "allow_any_host": true, 00:10:43.908 "hosts": [], 00:10:43.908 "serial_number": "SPDK2", 00:10:43.908 "model_number": "SPDK bdev Controller", 00:10:43.908 "max_namespaces": 32, 00:10:43.908 "min_cntlid": 1, 00:10:43.908 "max_cntlid": 65519, 00:10:43.908 "namespaces": [ 00:10:43.908 { 00:10:43.908 "nsid": 1, 00:10:43.908 "bdev_name": "Malloc2", 00:10:43.908 "name": "Malloc2", 00:10:43.908 "nguid": "B0D80C2A99B5402D8CD6696489CD2B9C", 00:10:43.908 "uuid": "b0d80c2a-99b5-402d-8cd6-696489cd2b9c" 00:10:43.908 } 00:10:43.908 ] 00:10:43.908 } 00:10:43.908 ] 00:10:43.908 01:03:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:10:43.908 01:03:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=4099124 00:10:43.908 01:03:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:10:43.908 01:03:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:10:43.908 01:03:59 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:10:43.908 01:03:59 nvmf_tcp.nvmf_vfio_user -- 
common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:43.908 01:03:59 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:43.908 01:03:59 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:10:43.908 01:03:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:10:43.908 01:03:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:10:43.908 EAL: No free 2048 kB hugepages reported on node 1 00:10:44.165 [2024-07-16 01:03:59.977417] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:44.165 Malloc4 00:10:44.165 01:04:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:10:44.446 [2024-07-16 01:04:00.358438] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:44.446 01:04:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:44.446 Asynchronous Event Request test 00:10:44.446 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:44.446 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:44.446 Registering asynchronous event callbacks... 00:10:44.446 Starting namespace attribute notice tests for all controllers... 00:10:44.446 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:10:44.446 aer_cb - Changed Namespace 00:10:44.446 Cleaning up... 
00:10:44.705 [ 00:10:44.705 { 00:10:44.705 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:44.705 "subtype": "Discovery", 00:10:44.705 "listen_addresses": [], 00:10:44.705 "allow_any_host": true, 00:10:44.705 "hosts": [] 00:10:44.705 }, 00:10:44.705 { 00:10:44.705 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:44.705 "subtype": "NVMe", 00:10:44.705 "listen_addresses": [ 00:10:44.705 { 00:10:44.705 "trtype": "VFIOUSER", 00:10:44.705 "adrfam": "IPv4", 00:10:44.705 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:44.705 "trsvcid": "0" 00:10:44.705 } 00:10:44.705 ], 00:10:44.705 "allow_any_host": true, 00:10:44.705 "hosts": [], 00:10:44.705 "serial_number": "SPDK1", 00:10:44.705 "model_number": "SPDK bdev Controller", 00:10:44.705 "max_namespaces": 32, 00:10:44.705 "min_cntlid": 1, 00:10:44.705 "max_cntlid": 65519, 00:10:44.705 "namespaces": [ 00:10:44.705 { 00:10:44.705 "nsid": 1, 00:10:44.705 "bdev_name": "Malloc1", 00:10:44.705 "name": "Malloc1", 00:10:44.705 "nguid": "7E0C3779AA834F1184159776B8CE81CE", 00:10:44.705 "uuid": "7e0c3779-aa83-4f11-8415-9776b8ce81ce" 00:10:44.705 }, 00:10:44.705 { 00:10:44.705 "nsid": 2, 00:10:44.705 "bdev_name": "Malloc3", 00:10:44.705 "name": "Malloc3", 00:10:44.705 "nguid": "21C718677DC445CBAF6A1BDEBC15669C", 00:10:44.705 "uuid": "21c71867-7dc4-45cb-af6a-1bdebc15669c" 00:10:44.705 } 00:10:44.705 ] 00:10:44.705 }, 00:10:44.705 { 00:10:44.705 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:44.705 "subtype": "NVMe", 00:10:44.705 "listen_addresses": [ 00:10:44.705 { 00:10:44.705 "trtype": "VFIOUSER", 00:10:44.705 "adrfam": "IPv4", 00:10:44.705 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:44.705 "trsvcid": "0" 00:10:44.705 } 00:10:44.705 ], 00:10:44.705 "allow_any_host": true, 00:10:44.705 "hosts": [], 00:10:44.705 "serial_number": "SPDK2", 00:10:44.705 "model_number": "SPDK bdev Controller", 00:10:44.705 "max_namespaces": 32, 00:10:44.705 "min_cntlid": 1, 00:10:44.705 "max_cntlid": 65519, 00:10:44.705 "namespaces": [ 00:10:44.705 { 00:10:44.705 "nsid": 1, 00:10:44.705 "bdev_name": "Malloc2", 00:10:44.705 "name": "Malloc2", 00:10:44.705 "nguid": "B0D80C2A99B5402D8CD6696489CD2B9C", 00:10:44.705 "uuid": "b0d80c2a-99b5-402d-8cd6-696489cd2b9c" 00:10:44.705 }, 00:10:44.705 { 00:10:44.705 "nsid": 2, 00:10:44.705 "bdev_name": "Malloc4", 00:10:44.705 "name": "Malloc4", 00:10:44.705 "nguid": "4BA74270C8F64FEDAB864B077F02BC8F", 00:10:44.705 "uuid": "4ba74270-c8f6-4fed-ab86-4b077f02bc8f" 00:10:44.705 } 00:10:44.705 ] 00:10:44.705 } 00:10:44.705 ] 00:10:44.705 01:04:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 4099124 00:10:44.705 01:04:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:10:44.705 01:04:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 4093612 00:10:44.705 01:04:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 4093612 ']' 00:10:44.705 01:04:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 4093612 00:10:44.705 01:04:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:10:44.705 01:04:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:44.705 01:04:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4093612 00:10:44.705 01:04:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:44.705 01:04:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo 
']' 00:10:44.705 01:04:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4093612' 00:10:44.705 killing process with pid 4093612 00:10:44.705 01:04:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 4093612 00:10:44.705 01:04:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 4093612 00:10:45.270 01:04:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:10:45.270 01:04:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:45.270 01:04:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:10:45.270 01:04:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:10:45.270 01:04:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:10:45.270 01:04:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=4099264 00:10:45.270 01:04:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:10:45.270 01:04:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 4099264' 00:10:45.270 Process pid: 4099264 00:10:45.270 01:04:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:45.270 01:04:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 4099264 00:10:45.270 01:04:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 4099264 ']' 00:10:45.270 01:04:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:45.270 01:04:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:45.270 01:04:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:45.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:45.270 01:04:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:45.270 01:04:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:10:45.270 [2024-07-16 01:04:01.067843] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:10:45.270 [2024-07-16 01:04:01.068864] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:10:45.270 [2024-07-16 01:04:01.068921] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:45.270 EAL: No free 2048 kB hugepages reported on node 1 00:10:45.270 [2024-07-16 01:04:01.128714] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:45.270 [2024-07-16 01:04:01.238372] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:45.270 [2024-07-16 01:04:01.238426] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:45.270 [2024-07-16 01:04:01.238464] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:45.270 [2024-07-16 01:04:01.238477] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:45.270 [2024-07-16 01:04:01.238487] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:45.270 [2024-07-16 01:04:01.238627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:45.270 [2024-07-16 01:04:01.238694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:45.270 [2024-07-16 01:04:01.238754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.270 [2024-07-16 01:04:01.238752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:45.528 [2024-07-16 01:04:01.347068] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:10:45.528 [2024-07-16 01:04:01.347309] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:10:45.528 [2024-07-16 01:04:01.347612] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:10:45.528 [2024-07-16 01:04:01.348230] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:10:45.528 [2024-07-16 01:04:01.348465] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:10:45.528 01:04:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:45.528 01:04:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:10:45.528 01:04:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:10:46.459 01:04:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:10:46.718 01:04:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:10:46.718 01:04:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:10:46.718 01:04:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:46.718 01:04:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:10:46.718 01:04:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:10:46.976 Malloc1 00:10:46.976 01:04:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:10:47.233 01:04:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:10:47.490 01:04:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:10:47.746 01:04:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 
$NUM_DEVICES) 00:10:47.746 01:04:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:10:47.746 01:04:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:10:48.003 Malloc2 00:10:48.003 01:04:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:10:48.260 01:04:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:10:48.518 01:04:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:10:48.775 01:04:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:10:48.775 01:04:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 4099264 00:10:48.775 01:04:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 4099264 ']' 00:10:48.775 01:04:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 4099264 00:10:48.775 01:04:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:10:48.775 01:04:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:48.775 01:04:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4099264 00:10:48.775 01:04:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:48.775 01:04:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:48.775 01:04:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4099264' 00:10:48.775 killing process with pid 4099264 00:10:48.775 01:04:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 4099264 00:10:48.775 01:04:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 4099264 00:10:49.339 01:04:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:10:49.339 01:04:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:49.339 00:10:49.339 real 0m52.638s 00:10:49.339 user 3m27.583s 00:10:49.339 sys 0m4.442s 00:10:49.339 01:04:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:49.339 01:04:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:10:49.339 ************************************ 00:10:49.339 END TEST nvmf_vfio_user 00:10:49.339 ************************************ 00:10:49.339 01:04:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:49.339 01:04:05 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:10:49.339 01:04:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:49.339 01:04:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:49.339 01:04:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:49.339 ************************************ 00:10:49.339 START 
TEST nvmf_vfio_user_nvme_compliance 00:10:49.339 ************************************ 00:10:49.339 01:04:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:10:49.339 * Looking for test storage... 00:10:49.339 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:10:49.339 01:04:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:49.339 01:04:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:10:49.339 01:04:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:49.339 01:04:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:49.339 01:04:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:49.339 01:04:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:49.339 01:04:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:49.339 01:04:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:49.339 01:04:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:49.339 01:04:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:49.339 01:04:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:49.339 01:04:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:49.339 01:04:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:49.339 01:04:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:49.339 01:04:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:49.339 01:04:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:49.339 01:04:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:49.339 01:04:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:49.339 01:04:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:49.339 01:04:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:49.339 01:04:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:49.339 01:04:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:49.339 01:04:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.339 01:04:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.339 01:04:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.339 01:04:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:10:49.339 01:04:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.339 01:04:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:10:49.339 01:04:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:49.339 01:04:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:49.339 01:04:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:49.339 01:04:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:49.339 01:04:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:49.339 01:04:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:49.339 01:04:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:49.339 01:04:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:49.339 01:04:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:10:49.339 01:04:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:49.339 01:04:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:10:49.339 01:04:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:10:49.339 01:04:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:10:49.339 01:04:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=4099867 00:10:49.339 01:04:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:10:49.339 01:04:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 4099867' 00:10:49.339 Process pid: 4099867 00:10:49.339 01:04:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:49.339 01:04:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 4099867 00:10:49.339 01:04:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 4099867 ']' 00:10:49.339 01:04:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:49.339 01:04:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:49.339 01:04:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:49.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:49.339 01:04:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:49.339 01:04:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:49.339 [2024-07-16 01:04:05.228930] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:10:49.339 [2024-07-16 01:04:05.229061] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:49.339 EAL: No free 2048 kB hugepages reported on node 1 00:10:49.340 [2024-07-16 01:04:05.286853] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:49.596 [2024-07-16 01:04:05.392378] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:49.596 [2024-07-16 01:04:05.392435] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:49.596 [2024-07-16 01:04:05.392464] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:49.596 [2024-07-16 01:04:05.392474] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:49.596 [2024-07-16 01:04:05.392484] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:49.596 [2024-07-16 01:04:05.392615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:49.596 [2024-07-16 01:04:05.392680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:49.596 [2024-07-16 01:04:05.392682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.596 01:04:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:49.596 01:04:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:10:49.596 01:04:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:10:50.525 01:04:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:10:50.525 01:04:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:10:50.525 01:04:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:10:50.525 01:04:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.525 01:04:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:50.525 01:04:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.525 01:04:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:10:50.525 01:04:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:10:50.525 01:04:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.525 01:04:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:50.782 malloc0 00:10:50.782 01:04:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.782 01:04:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:10:50.782 01:04:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.782 01:04:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:50.782 01:04:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.782 01:04:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:10:50.782 01:04:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.782 01:04:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:50.782 01:04:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.782 01:04:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:10:50.782 01:04:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.782 01:04:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:50.782 01:04:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.782 
01:04:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:10:50.782 EAL: No free 2048 kB hugepages reported on node 1 00:10:50.782 00:10:50.782 00:10:50.782 CUnit - A unit testing framework for C - Version 2.1-3 00:10:50.782 http://cunit.sourceforge.net/ 00:10:50.782 00:10:50.782 00:10:50.782 Suite: nvme_compliance 00:10:50.782 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-16 01:04:06.727470] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:50.782 [2024-07-16 01:04:06.728907] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:10:50.782 [2024-07-16 01:04:06.728945] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:10:50.782 [2024-07-16 01:04:06.728965] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:10:50.782 [2024-07-16 01:04:06.730482] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:50.782 passed 00:10:51.039 Test: admin_identify_ctrlr_verify_fused ...[2024-07-16 01:04:06.815122] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:51.039 [2024-07-16 01:04:06.818146] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:51.039 passed 00:10:51.039 Test: admin_identify_ns ...[2024-07-16 01:04:06.907519] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:51.039 [2024-07-16 01:04:06.967976] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:10:51.039 [2024-07-16 01:04:06.975969] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:10:51.039 [2024-07-16 01:04:06.997115] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:51.039 passed 00:10:51.295 Test: admin_get_features_mandatory_features ...[2024-07-16 01:04:07.078971] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:51.295 [2024-07-16 01:04:07.081997] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:51.295 passed 00:10:51.295 Test: admin_get_features_optional_features ...[2024-07-16 01:04:07.166555] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:51.295 [2024-07-16 01:04:07.169583] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:51.295 passed 00:10:51.295 Test: admin_set_features_number_of_queues ...[2024-07-16 01:04:07.255630] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:51.552 [2024-07-16 01:04:07.361076] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:51.552 passed 00:10:51.552 Test: admin_get_log_page_mandatory_logs ...[2024-07-16 01:04:07.441704] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:51.552 [2024-07-16 01:04:07.446738] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:51.552 passed 00:10:51.552 Test: admin_get_log_page_with_lpo ...[2024-07-16 01:04:07.529509] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:51.808 [2024-07-16 01:04:07.596971] 
ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:10:51.808 [2024-07-16 01:04:07.610077] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:51.808 passed 00:10:51.808 Test: fabric_property_get ...[2024-07-16 01:04:07.691288] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:51.808 [2024-07-16 01:04:07.693571] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:10:51.808 [2024-07-16 01:04:07.695321] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:51.808 passed 00:10:51.808 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-16 01:04:07.780850] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:51.808 [2024-07-16 01:04:07.782176] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:10:51.808 [2024-07-16 01:04:07.783872] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:52.065 passed 00:10:52.065 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-16 01:04:07.866604] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:52.065 [2024-07-16 01:04:07.950982] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:52.065 [2024-07-16 01:04:07.966982] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:52.065 [2024-07-16 01:04:07.972088] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:52.065 passed 00:10:52.065 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-16 01:04:08.056762] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:52.065 [2024-07-16 01:04:08.058092] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:10:52.321 [2024-07-16 01:04:08.059800] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:52.321 passed 00:10:52.321 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-16 01:04:08.139993] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:52.321 [2024-07-16 01:04:08.215985] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:10:52.321 [2024-07-16 01:04:08.241977] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:52.321 [2024-07-16 01:04:08.247089] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:52.321 passed 00:10:52.578 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-16 01:04:08.330634] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:52.578 [2024-07-16 01:04:08.331981] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:10:52.578 [2024-07-16 01:04:08.332019] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:10:52.578 [2024-07-16 01:04:08.333656] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:52.578 passed 00:10:52.578 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-16 01:04:08.419057] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:52.578 [2024-07-16 01:04:08.511980] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:10:52.578 [2024-07-16 01:04:08.519969] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:10:52.578 [2024-07-16 01:04:08.527965] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:10:52.578 [2024-07-16 01:04:08.535974] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:10:52.578 [2024-07-16 01:04:08.565092] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:52.835 passed 00:10:52.835 Test: admin_create_io_sq_verify_pc ...[2024-07-16 01:04:08.646062] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:52.835 [2024-07-16 01:04:08.664992] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:10:52.835 [2024-07-16 01:04:08.682498] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:52.835 passed 00:10:52.835 Test: admin_create_io_qp_max_qps ...[2024-07-16 01:04:08.764095] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:54.207 [2024-07-16 01:04:09.868973] nvme_ctrlr.c:5475:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:10:54.464 [2024-07-16 01:04:10.256622] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:54.464 passed 00:10:54.464 Test: admin_create_io_sq_shared_cq ...[2024-07-16 01:04:10.341534] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:54.721 [2024-07-16 01:04:10.473963] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:10:54.721 [2024-07-16 01:04:10.511062] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:54.721 passed 00:10:54.721 00:10:54.721 Run Summary: Type Total Ran Passed Failed Inactive 00:10:54.721 suites 1 1 n/a 0 0 00:10:54.721 tests 18 18 18 0 0 00:10:54.721 asserts 360 360 360 0 n/a 00:10:54.721 00:10:54.721 Elapsed time = 1.568 seconds 00:10:54.721 01:04:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 4099867 00:10:54.721 01:04:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 4099867 ']' 00:10:54.721 01:04:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 4099867 00:10:54.721 01:04:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:10:54.721 01:04:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:54.721 01:04:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4099867 00:10:54.721 01:04:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:54.721 01:04:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:54.721 01:04:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4099867' 00:10:54.721 killing process with pid 4099867 00:10:54.721 01:04:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 4099867 00:10:54.721 01:04:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 4099867 00:10:54.978 01:04:10 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:10:54.978 01:04:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:10:54.978 00:10:54.978 real 0m5.729s 00:10:54.978 user 0m16.059s 00:10:54.978 sys 0m0.520s 00:10:54.978 01:04:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:54.978 01:04:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:54.978 ************************************ 00:10:54.978 END TEST nvmf_vfio_user_nvme_compliance 00:10:54.978 ************************************ 00:10:54.978 01:04:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:54.978 01:04:10 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:10:54.978 01:04:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:54.978 01:04:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:54.978 01:04:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:54.978 ************************************ 00:10:54.978 START TEST nvmf_vfio_user_fuzz 00:10:54.978 ************************************ 00:10:54.978 01:04:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:10:54.978 * Looking for test storage... 00:10:54.978 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:54.978 01:04:10 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:54.978 01:04:10 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:10:54.978 01:04:10 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:54.978 01:04:10 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:54.978 01:04:10 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:54.978 01:04:10 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:54.978 01:04:10 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:54.978 01:04:10 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:54.978 01:04:10 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:54.978 01:04:10 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:54.978 01:04:10 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:54.978 01:04:10 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:54.978 01:04:10 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:54.978 01:04:10 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:54.979 01:04:10 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:54.979 01:04:10 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:54.979 01:04:10 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:10:54.979 01:04:10 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:54.979 01:04:10 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:54.979 01:04:10 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:54.979 01:04:10 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:54.979 01:04:10 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:54.979 01:04:10 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.979 01:04:10 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.979 01:04:10 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.979 01:04:10 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:10:54.979 01:04:10 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.979 01:04:10 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:10:54.979 01:04:10 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:54.979 01:04:10 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:54.979 01:04:10 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:54.979 01:04:10 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:54.979 01:04:10 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:54.979 01:04:10 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:54.979 01:04:10 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:54.979 01:04:10 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:55.236 01:04:10 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:10:55.236 01:04:10 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:55.236 01:04:10 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:10:55.236 01:04:10 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:10:55.236 01:04:10 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:10:55.236 01:04:10 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:10:55.236 01:04:10 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:10:55.236 01:04:10 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=4100592 00:10:55.236 01:04:10 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:55.236 01:04:10 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 4100592' 00:10:55.236 Process pid: 4100592 00:10:55.236 01:04:10 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:55.237 01:04:10 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 4100592 00:10:55.237 01:04:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 4100592 ']' 00:10:55.237 01:04:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.237 01:04:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:55.237 01:04:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:55.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
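With the fuzz target's pid echoed and the RPC socket up, the trace below repeats the same vfio-user bring-up the compliance test performed: create the VFIOUSER transport, back a namespace with a 64 MiB malloc bdev, and add a listener at /var/run/vfio-user. rpc_cmd is a wrapper around SPDK's scripts/rpc.py; condensed into direct calls (the explicit rpc.py path is an assumption), the sequence is:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed path
$rpc nvmf_create_transport -t VFIOUSER          # register the transport
mkdir -p /var/run/vfio-user                     # directory for the vfio-user socket
$rpc bdev_malloc_create 64 512 -b malloc0       # 64 MiB bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
$rpc nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
$rpc nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

The nvme_fuzz run that follows connects to that listener with the matching trid string and, per the timestamps, fuzzes it for roughly 30 seconds before dumping its per-opcode statistics.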
00:10:55.237 01:04:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:55.237 01:04:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:55.494 01:04:11 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:55.494 01:04:11 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:10:55.494 01:04:11 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:10:56.426 01:04:12 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:10:56.426 01:04:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.426 01:04:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:56.426 01:04:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.426 01:04:12 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:10:56.426 01:04:12 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:10:56.426 01:04:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.426 01:04:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:56.426 malloc0 00:10:56.426 01:04:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.426 01:04:12 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:10:56.426 01:04:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.426 01:04:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:56.426 01:04:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.426 01:04:12 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:10:56.426 01:04:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.426 01:04:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:56.426 01:04:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.426 01:04:12 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:10:56.426 01:04:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.426 01:04:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:56.426 01:04:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.426 01:04:12 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:10:56.426 01:04:12 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:11:28.495 Fuzzing completed. 
Shutting down the fuzz application 00:11:28.495 00:11:28.495 Dumping successful admin opcodes: 00:11:28.495 8, 9, 10, 24, 00:11:28.495 Dumping successful io opcodes: 00:11:28.495 0, 00:11:28.495 NS: 0x200003a1ef00 I/O qp, Total commands completed: 641616, total successful commands: 2489, random_seed: 2283034176 00:11:28.495 NS: 0x200003a1ef00 admin qp, Total commands completed: 81479, total successful commands: 651, random_seed: 3413859136 00:11:28.495 01:04:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:11:28.495 01:04:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.495 01:04:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:28.495 01:04:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.495 01:04:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 4100592 00:11:28.495 01:04:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 4100592 ']' 00:11:28.496 01:04:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 4100592 00:11:28.496 01:04:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:11:28.496 01:04:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:28.496 01:04:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4100592 00:11:28.496 01:04:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:28.496 01:04:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:28.496 01:04:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4100592' 00:11:28.496 killing process with pid 4100592 00:11:28.496 01:04:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 4100592 00:11:28.496 01:04:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 4100592 00:11:28.496 01:04:44 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:11:28.496 01:04:44 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:11:28.496 00:11:28.496 real 0m33.271s 00:11:28.496 user 0m32.520s 00:11:28.496 sys 0m29.395s 00:11:28.496 01:04:44 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:28.496 01:04:44 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:28.496 ************************************ 00:11:28.496 END TEST nvmf_vfio_user_fuzz 00:11:28.496 ************************************ 00:11:28.496 01:04:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:28.496 01:04:44 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:28.496 01:04:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:28.496 01:04:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:28.496 01:04:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:28.496 ************************************ 00:11:28.496 
START TEST nvmf_host_management 00:11:28.496 ************************************ 00:11:28.496 01:04:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:28.496 * Looking for test storage... 00:11:28.496 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:28.496 01:04:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:28.496 01:04:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:11:28.496 01:04:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:28.496 01:04:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:28.496 01:04:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:28.496 01:04:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:28.496 01:04:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:28.496 01:04:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:28.496 01:04:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:28.496 01:04:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:28.496 01:04:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:28.496 01:04:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:28.496 01:04:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:28.496 01:04:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:28.496 01:04:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:28.496 01:04:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:28.496 01:04:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:28.496 01:04:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:28.496 01:04:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:28.496 01:04:44 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:28.496 01:04:44 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:28.496 01:04:44 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:28.496 01:04:44 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.496 01:04:44 
nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.496 01:04:44 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.496 01:04:44 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:11:28.496 01:04:44 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.496 01:04:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:11:28.496 01:04:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:28.496 01:04:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:28.496 01:04:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:28.496 01:04:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:28.496 01:04:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:28.496 01:04:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:28.496 01:04:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:28.496 01:04:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:28.496 01:04:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:28.496 01:04:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:28.496 01:04:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:11:28.496 01:04:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:28.496 01:04:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:28.496 01:04:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:28.496 01:04:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:28.496 01:04:44 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:28.496 01:04:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:28.496 01:04:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:28.496 01:04:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:28.496 01:04:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:28.496 01:04:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:28.496 01:04:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:11:28.496 01:04:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:30.403 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:30.403 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:30.403 Found net devices under 0000:09:00.0: cvl_0_0 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:30.403 Found net devices under 0000:09:00.1: cvl_0_1 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:30.403 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:30.662 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:30.662 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:30.662 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:30.662 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:30.662 01:04:46 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:30.662 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:30.662 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:30.662 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:30.662 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.152 ms 00:11:30.662 00:11:30.662 --- 10.0.0.2 ping statistics --- 00:11:30.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:30.662 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:11:30.662 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:30.663 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:30.663 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:11:30.663 00:11:30.663 --- 10.0.0.1 ping statistics --- 00:11:30.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:30.663 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:11:30.663 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:30.663 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:11:30.663 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:30.663 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:30.663 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:30.663 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:30.663 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:30.663 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:30.663 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:30.663 01:04:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:11:30.663 01:04:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:11:30.663 01:04:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:11:30.663 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:30.663 01:04:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:30.663 01:04:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:30.663 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=4106173 00:11:30.663 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:11:30.663 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 4106173 00:11:30.663 01:04:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 4106173 ']' 00:11:30.663 01:04:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:30.663 01:04:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:30.663 01:04:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:11:30.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:30.663 01:04:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:30.663 01:04:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:30.663 [2024-07-16 01:04:46.558267] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:11:30.663 [2024-07-16 01:04:46.558368] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:30.663 EAL: No free 2048 kB hugepages reported on node 1 00:11:30.663 [2024-07-16 01:04:46.624553] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:30.921 [2024-07-16 01:04:46.737898] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:30.921 [2024-07-16 01:04:46.737979] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:30.921 [2024-07-16 01:04:46.737996] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:30.921 [2024-07-16 01:04:46.738007] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:30.921 [2024-07-16 01:04:46.738017] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:30.921 [2024-07-16 01:04:46.738098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:30.921 [2024-07-16 01:04:46.738175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:30.921 [2024-07-16 01:04:46.738245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:11:30.921 [2024-07-16 01:04:46.738248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:30.921 01:04:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:30.921 01:04:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:11:30.921 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:30.921 01:04:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:30.921 01:04:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:30.921 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:30.921 01:04:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:30.921 01:04:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.921 01:04:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:30.921 [2024-07-16 01:04:46.890799] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:30.921 01:04:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.921 01:04:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:11:30.921 01:04:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:30.921 01:04:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:30.921 01:04:46 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:30.921 01:04:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:11:30.921 01:04:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:11:30.921 01:04:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.921 01:04:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:31.203 Malloc0 00:11:31.203 [2024-07-16 01:04:46.949596] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:31.203 01:04:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:31.203 01:04:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:11:31.203 01:04:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:31.204 01:04:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:31.204 01:04:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=4106226 00:11:31.204 01:04:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 4106226 /var/tmp/bdevperf.sock 00:11:31.204 01:04:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 4106226 ']' 00:11:31.204 01:04:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:31.204 01:04:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:11:31.204 01:04:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:11:31.204 01:04:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:31.204 01:04:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:31.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:11:31.204 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:11:31.204 01:04:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:31.204 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:11:31.204 01:04:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:31.204 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:31.204 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:31.204 { 00:11:31.204 "params": { 00:11:31.204 "name": "Nvme$subsystem", 00:11:31.204 "trtype": "$TEST_TRANSPORT", 00:11:31.204 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:31.204 "adrfam": "ipv4", 00:11:31.204 "trsvcid": "$NVMF_PORT", 00:11:31.204 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:31.204 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:31.204 "hdgst": ${hdgst:-false}, 00:11:31.204 "ddgst": ${ddgst:-false} 00:11:31.204 }, 00:11:31.204 "method": "bdev_nvme_attach_controller" 00:11:31.204 } 00:11:31.204 EOF 00:11:31.204 )") 00:11:31.204 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:11:31.204 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:11:31.204 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:11:31.204 01:04:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:31.204 "params": { 00:11:31.204 "name": "Nvme0", 00:11:31.204 "trtype": "tcp", 00:11:31.204 "traddr": "10.0.0.2", 00:11:31.204 "adrfam": "ipv4", 00:11:31.204 "trsvcid": "4420", 00:11:31.204 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:31.204 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:31.204 "hdgst": false, 00:11:31.204 "ddgst": false 00:11:31.204 }, 00:11:31.204 "method": "bdev_nvme_attach_controller" 00:11:31.204 }' 00:11:31.204 [2024-07-16 01:04:47.021038] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:11:31.204 [2024-07-16 01:04:47.021131] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4106226 ] 00:11:31.204 EAL: No free 2048 kB hugepages reported on node 1 00:11:31.204 [2024-07-16 01:04:47.080674] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:31.204 [2024-07-16 01:04:47.192569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.462 Running I/O for 10 seconds... 
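For the I/O phase the test drives bdevperf instead of a kernel initiator: gen_nvmf_target_json expands the heredoc template shown above into a bdev_nvme_attach_controller entry, and bdevperf reads the rendered JSON through process substitution (--json /dev/fd/63), so no config file touches disk. A simplified stand-in for that templating, with bdevperf flags copied verbatim from the run above; the outer wrapper shape is an assumption, since this excerpt never prints the assembled document:

# sketch of the templating: the heredoc expands $subsystem into each field
subsystem=0
config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
# wrap the entry in the usual SPDK JSON-config layout (assumed here -- the
# trace's jq step re-serializes the full document but does not display it)
full=$(jq -n --argjson entry "$config" \
    '{subsystems: [{subsystem: "bdev", config: [$entry]}]}')
bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
# 64-deep queue, 64 KiB I/Os, verify workload, 10-second run, as traced above
$bdevperf -r /var/tmp/bdevperf.sock --json <(echo "$full") -q 64 -o 65536 -w verify -t 10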
00:11:31.462 01:04:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:31.462 01:04:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:11:31.462 01:04:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:11:31.462 01:04:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.462 01:04:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:31.720 01:04:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:31.720 01:04:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:31.720 01:04:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:11:31.720 01:04:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:11:31.720 01:04:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:11:31.720 01:04:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:11:31.720 01:04:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:11:31.720 01:04:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:11:31.720 01:04:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:11:31.720 01:04:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:11:31.720 01:04:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:11:31.720 01:04:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.720 01:04:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:31.720 01:04:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:31.720 01:04:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:11:31.720 01:04:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:11:31.720 01:04:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:11:31.979 01:04:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:11:31.979 01:04:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:11:31.979 01:04:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:11:31.979 01:04:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:11:31.979 01:04:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.979 01:04:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:31.979 01:04:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:31.979 01:04:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:11:31.979 01:04:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:11:31.979 01:04:47 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@59 -- # ret=0 00:11:31.979 01:04:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:11:31.979 01:04:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:11:31.979 01:04:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:31.979 01:04:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.979 01:04:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:31.979 [2024-07-16 01:04:47.812497] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ef690 is same with the state(5) to be set 00:11:31.979 [2024-07-16 01:04:47.812582] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ef690 is same with the state(5) to be set 00:11:31.979 [2024-07-16 01:04:47.812598] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ef690 is same with the state(5) to be set 00:11:31.979 [2024-07-16 01:04:47.812610] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ef690 is same with the state(5) to be set 00:11:31.979 [2024-07-16 01:04:47.812622] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ef690 is same with the state(5) to be set 00:11:31.979 [2024-07-16 01:04:47.812634] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ef690 is same with the state(5) to be set 00:11:31.979 [2024-07-16 01:04:47.812659] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ef690 is same with the state(5) to be set 00:11:31.979 01:04:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:31.979 01:04:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:31.979 01:04:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.979 01:04:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:31.979 [2024-07-16 01:04:47.822090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:11:31.979 [2024-07-16 01:04:47.822132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:31.979 [2024-07-16 01:04:47.822150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:11:31.979 [2024-07-16 01:04:47.822165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:31.979 [2024-07-16 01:04:47.822180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:11:31.979 [2024-07-16 01:04:47.822195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:31.979 [2024-07-16 01:04:47.822210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:11:31.979 [2024-07-16 01:04:47.822225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:31.979 [2024-07-16 01:04:47.822240] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x693980 is same with the state(5) to be set
00:11:31.979 [2024-07-16 01:04:47.822306 .. 01:04:47.824457] nvme_qpair.c: *NOTICE*: WRITE sqid:1 cid:0-63 nsid:1 lba:81920-89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (64 near-identical command/completion notice pairs collapsed)
00:11:31.981 [2024-07-16 01:04:47.824551] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xaa4ca0 was disconnected and freed. reset controller.
00:11:31.981 01:04:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:11:31.981 01:04:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:11:31.981 [2024-07-16 01:04:47.825690] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:11:31.981 task offset: 81920 on job bdev=Nvme0n1 fails
00:11:31.981
00:11:31.981                                                           Latency(us)
00:11:31.981 Device Information : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:11:31.981 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:11:31.981 Job: Nvme0n1 ended in about 0.41 seconds with error
00:11:31.981 Verification LBA range: start 0x0 length 0x400
00:11:31.981 Nvme0n1            :       0.41    1565.64      97.85    156.56     0.00   36108.56    3082.62   33981.63
00:11:31.981 ===================================================================================================================
00:11:31.981 Total              :               1565.64      97.85    156.56     0.00   36108.56    3082.62   33981.63
00:11:31.981 [2024-07-16 01:04:47.827564] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:11:31.981 [2024-07-16 01:04:47.827593] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x693980 (9): Bad file descriptor
00:11:31.981 [2024-07-16 01:04:47.962089] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
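The abort storm and controller reset above are the point of the test: host_management.sh@84-87 revokes the host's access to the subsystem while bdevperf is mid-run, so the target tears down the TCP qpair and every queued WRITE completes ABORTED - SQ DELETION; access is then restored and bdevperf's reset path reconnects ("Resetting controller successful"). The two RPCs, copied from the trace, with rpc.py standing in for the framework's rpc_cmd wrapper:

  rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0   # drop the live initiator
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0      # re-authorize it
  sleep 1                                                                                  # give the reset time to finish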
00:11:32.914 01:04:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 4106226 00:11:32.914 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (4106226) - No such process 00:11:32.914 01:04:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:11:32.914 01:04:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:11:32.914 01:04:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:11:32.914 01:04:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:11:32.914 01:04:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:11:32.914 01:04:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:11:32.914 01:04:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:32.914 01:04:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:32.914 { 00:11:32.914 "params": { 00:11:32.914 "name": "Nvme$subsystem", 00:11:32.914 "trtype": "$TEST_TRANSPORT", 00:11:32.914 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:32.914 "adrfam": "ipv4", 00:11:32.914 "trsvcid": "$NVMF_PORT", 00:11:32.914 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:32.914 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:32.914 "hdgst": ${hdgst:-false}, 00:11:32.914 "ddgst": ${ddgst:-false} 00:11:32.914 }, 00:11:32.914 "method": "bdev_nvme_attach_controller" 00:11:32.914 } 00:11:32.914 EOF 00:11:32.914 )") 00:11:32.914 01:04:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:11:32.914 01:04:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:11:32.914 01:04:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:11:32.914 01:04:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:32.914 "params": { 00:11:32.914 "name": "Nvme0", 00:11:32.914 "trtype": "tcp", 00:11:32.914 "traddr": "10.0.0.2", 00:11:32.914 "adrfam": "ipv4", 00:11:32.914 "trsvcid": "4420", 00:11:32.914 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:32.914 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:32.914 "hdgst": false, 00:11:32.914 "ddgst": false 00:11:32.914 }, 00:11:32.914 "method": "bdev_nvme_attach_controller" 00:11:32.914 }' 00:11:32.914 [2024-07-16 01:04:48.873348] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:11:32.914 [2024-07-16 01:04:48.873420] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4106496 ] 00:11:32.914 EAL: No free 2048 kB hugepages reported on node 1 00:11:33.171 [2024-07-16 01:04:48.935180] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:33.171 [2024-07-16 01:04:49.049931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.428 Running I/O for 1 seconds... 
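The kill -9 of pid 4106226 reporting "No such process" is expected: the first bdevperf already exited after the induced failure, and the follow-up "true" on the same script line (host_management.sh@91) swallows the error. The test then re-runs bdevperf for one second against the repaired target, this time without a separate RPC socket; as a stand-alone sketch under the same assumptions as above:

  ./build/examples/bdevperf --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 1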
00:11:34.359
00:11:34.360                                                           Latency(us)
00:11:34.360 Device Information : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:11:34.360 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:11:34.360 Verification LBA range: start 0x0 length 0x400
00:11:34.360 Nvme0n1            :       1.01    1668.87     104.30      0.00     0.00   37568.42    2075.31   33204.91
00:11:34.360 ===================================================================================================================
00:11:34.360 Total              :               1668.87     104.30      0.00     0.00   37568.42    2075.31   33204.91
00:11:34.617 01:04:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:11:34.617 01:04:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:11:34.617 01:04:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:11:34.617 01:04:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:34.617 01:04:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:11:34.617 01:04:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:34.617 01:04:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:11:34.617 01:04:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:34.617 01:04:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:11:34.617 01:04:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:34.617 01:04:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:34.617 rmmod nvme_tcp 00:11:34.617 rmmod nvme_fabrics 00:11:34.617 rmmod nvme_keyring 00:11:34.617 01:04:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:34.617 01:04:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:11:34.617 01:04:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:11:34.617 01:04:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 4106173 ']' 00:11:34.617 01:04:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 4106173 00:11:34.617 01:04:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 4106173 ']' 00:11:34.617 01:04:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 4106173 00:11:34.617 01:04:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:11:34.617 01:04:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:34.617 01:04:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4106173 00:11:34.617 01:04:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:34.617 01:04:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:34.617 01:04:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4106173' 00:11:34.617 killing process with pid 4106173 00:11:34.617 01:04:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 4106173 00:11:34.617 01:04:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 4106173 00:11:34.874 [2024-07-16 01:04:50.865473] 
app.c: 716:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:11:35.133 01:04:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:35.133 01:04:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:35.133 01:04:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:35.133 01:04:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:35.133 01:04:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:35.133 01:04:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:35.133 01:04:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:35.133 01:04:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:37.041 01:04:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:37.041 01:04:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:11:37.041 00:11:37.041 real 0m8.718s 00:11:37.041 user 0m19.366s 00:11:37.041 sys 0m2.770s 00:11:37.041 01:04:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:37.041 01:04:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:37.041 ************************************ 00:11:37.041 END TEST nvmf_host_management 00:11:37.041 ************************************ 00:11:37.041 01:04:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:37.041 01:04:52 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:37.041 01:04:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:37.041 01:04:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:37.041 01:04:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:37.041 ************************************ 00:11:37.041 START TEST nvmf_lvol 00:11:37.041 ************************************ 00:11:37.041 01:04:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:37.300 * Looking for test storage... 
00:11:37.300 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:37.300 01:04:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:37.300 01:04:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:11:37.300 01:04:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:37.300 01:04:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:37.300 01:04:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:37.300 01:04:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:37.300 01:04:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:37.300 01:04:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:37.300 01:04:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:37.300 01:04:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:37.300 01:04:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:37.300 01:04:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:37.300 01:04:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:37.300 01:04:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:37.300 01:04:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:37.300 01:04:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:37.300 01:04:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:37.300 01:04:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:37.300 01:04:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:37.300 01:04:53 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:37.300 01:04:53 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:37.301 01:04:53 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:37.301 01:04:53 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.301 01:04:53 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.301 01:04:53 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.301 01:04:53 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:11:37.301 01:04:53 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.301 01:04:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:11:37.301 01:04:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:37.301 01:04:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:37.301 01:04:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:37.301 01:04:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:37.301 01:04:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:37.301 01:04:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:37.301 01:04:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:37.301 01:04:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:37.301 01:04:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:37.301 01:04:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:37.301 01:04:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:11:37.301 01:04:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:11:37.301 01:04:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:37.301 01:04:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:11:37.301 01:04:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:37.301 01:04:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:37.301 01:04:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:37.301 01:04:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:37.301 01:04:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:37.301 01:04:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:37.301 01:04:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:37.301 01:04:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:37.301 01:04:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:37.301 01:04:53 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:37.301 01:04:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:11:37.301 01:04:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:39.205 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:39.205 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:39.206 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:39.206 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:39.206 Found net devices under 0000:09:00.0: cvl_0_0 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:39.206 Found net devices under 0000:09:00.1: cvl_0_1 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:39.206 
01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:39.206 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:39.465 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:39.465 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:39.465 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:39.465 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:39.465 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:39.465 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:39.465 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:39.465 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:39.465 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:11:39.465 00:11:39.465 --- 10.0.0.2 ping statistics --- 00:11:39.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:39.465 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:11:39.465 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:39.465 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:39.465 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:11:39.465 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:39.465 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms
00:11:39.465
00:11:39.465 --- 10.0.0.2 ping statistics ---
00:11:39.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:39.465 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms
00:11:39.465 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:11:39.465 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:39.465 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms
00:11:39.465
00:11:39.465 --- 10.0.0.1 ping statistics ---
00:11:39.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:39.465 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms
00:11:39.465 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:39.465 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0
00:11:39.465 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:11:39.465 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:39.465 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:11:39.465 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:11:39.465 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:39.465 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:11:39.465 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp
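With the data path verified and nvme-tcp loaded, the next stretch of trace is nvmfappstart: it launches the SPDK target inside the namespace and blocks until the RPC socket answers. A minimal equivalent (a sketch; the rpc_get_methods probe is just one way to poll readiness — autotest_common.sh's waitforlisten does effectively this with a retry limit):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
    nvmfpid=$!
    # poll /var/tmp/spdk.sock until it accepts RPCs
    until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done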
00:11:39.465 01:04:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7
00:11:39.465 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:11:39.465 01:04:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable
00:11:39.465 01:04:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:11:39.465 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=4108716
00:11:39.465 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7
00:11:39.465 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 4108716
00:11:39.465 01:04:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 4108716 ']'
00:11:39.465 01:04:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:39.465 01:04:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100
00:11:39.465 01:04:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:39.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:39.465 01:04:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable
00:11:39.465 01:04:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:11:39.465 [2024-07-16 01:04:55.359764] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization...
00:11:39.465 [2024-07-16 01:04:55.359841] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:39.465 EAL: No free 2048 kB hugepages reported on node 1
00:11:39.465 [2024-07-16 01:04:55.423222] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3
00:11:39.723 [2024-07-16 01:04:55.529928] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:11:39.723 [2024-07-16 01:04:55.529990] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:11:39.723 [2024-07-16 01:04:55.530013] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:11:39.723 [2024-07-16 01:04:55.530024] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running.
00:11:39.723 [2024-07-16 01:04:55.530034] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:11:39.723 [2024-07-16 01:04:55.530094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:11:39.723 [2024-07-16 01:04:55.530150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:11:39.723 [2024-07-16 01:04:55.530154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:11:39.723 01:04:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:11:39.723 01:04:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0
00:11:39.723 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:11:39.723 01:04:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable
00:11:39.724 01:04:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:11:39.724 01:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:11:39.724 01:04:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:11:39.981 [2024-07-16 01:04:55.880702] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:11:39.981 01:04:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:11:40.239 01:04:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 '
00:11:40.239 01:04:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:11:40.805 01:04:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1
00:11:40.805 01:04:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
00:11:40.805 01:04:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs
00:11:41.371 01:04:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=f2915fd5-6818-4465-a329-d34cab02799c
00:11:41.371 01:04:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f2915fd5-6818-4465-a329-d34cab02799c lvol 20
00:11:41.627 01:04:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=6bc5450d-26e7-41f7-95e7-7967178279b7
00:11:41.627 01:04:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:11:41.884 01:04:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6bc5450d-26e7-41f7-95e7-7967178279b7
00:11:41.884 01:04:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
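That run of rpc.py calls is the whole lvol-over-RAID provisioning for nvmf_lvol in one pass; stripped of the xtrace noise it amounts to the following sketch (the UUIDs are whatever the create calls print on a given run):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512                    # -> Malloc0
    rpc.py bdev_malloc_create 64 512                    # -> Malloc1
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$(rpc.py bdev_lvol_create_lvstore raid0 lvs)    # prints the new lvstore UUID
    lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 20)   # 20 MiB volume, prints its UUID
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420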
00:11:42.446 [2024-07-16 01:04:58.140546] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:42.446 01:04:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:11:42.446 01:04:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=4109023
00:11:42.446 01:04:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1
00:11:42.446 01:04:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18
00:11:42.446 EAL: No free 2048 kB hugepages reported on node 1
00:11:43.424 01:04:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 6bc5450d-26e7-41f7-95e7-7967178279b7 MY_SNAPSHOT
00:11:43.988 01:04:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=1e66dbc0-ee42-4ab6-99be-38a90ae85cf2
00:11:43.988 01:04:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 6bc5450d-26e7-41f7-95e7-7967178279b7 30
00:11:44.245 01:05:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 1e66dbc0-ee42-4ab6-99be-38a90ae85cf2 MY_CLONE
00:11:44.502 01:05:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=ddcfcf69-b9f9-404f-ba79-0c7487065cf4
00:11:44.502 01:05:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate ddcfcf69-b9f9-404f-ba79-0c7487065cf4
00:11:45.081 01:05:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 4109023
00:11:53.197 Initializing NVMe Controllers
00:11:53.197 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0
00:11:53.197 Controller IO queue size 128, less than required.
00:11:53.197 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:11:53.197 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:11:53.197 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:11:53.197 Initialization complete. Launching workers.
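The point of launching spdk_nvme_perf in the background is that the snapshot and clone operations just traced run against a volume under active random-write load. The lifecycle, reduced to its four RPCs (a sketch; $lvol/$snapshot/$clone stand for the UUIDs captured above):

    rpc.py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT   # freeze current state; lvol now writes on top of it
    rpc.py bdev_lvol_resize   "$lvol" 30            # grow the live volume from 20 to 30 MiB
    rpc.py bdev_lvol_clone    "$snapshot" MY_CLONE  # writable clone off the read-only snapshot
    rpc.py bdev_lvol_inflate  "$clone"              # allocate all clusters, decouple it from the snapshot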
00:11:53.197 ========================================================
00:11:53.197 Latency(us)
00:11:53.197 Device Information : IOPS MiB/s Average min max
00:11:53.197 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10649.30 41.60 12023.10 2029.06 67007.39
00:11:53.197 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10706.60 41.82 11963.89 1779.08 97090.38
00:11:53.197 ========================================================
00:11:53.197 Total : 21355.90 83.42 11993.42 1779.08 97090.38
00:11:53.197
00:11:53.197 01:05:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:11:53.197 01:05:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6bc5450d-26e7-41f7-95e7-7967178279b7
00:11:53.455 01:05:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f2915fd5-6818-4465-a329-d34cab02799c
00:11:53.714 01:05:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:11:53.714 01:05:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:11:53.714 01:05:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:11:53.714 01:05:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup
00:11:53.714 01:05:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync
00:11:53.714 01:05:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:11:53.714 01:05:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e
00:11:53.714 01:05:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20}
00:11:53.714 01:05:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:11:53.714 rmmod nvme_tcp
00:11:53.714 rmmod nvme_fabrics
00:11:53.714 rmmod nvme_keyring
00:11:53.714 01:05:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:11:53.714 01:05:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e
00:11:53.714 01:05:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0
00:11:53.714 01:05:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 4108716 ']'
00:11:53.714 01:05:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 4108716
00:11:53.714 01:05:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 4108716 ']'
00:11:53.714 01:05:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 4108716
00:11:53.714 01:05:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname
00:11:53.714 01:05:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:11:53.714 01:05:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4108716
00:11:53.972 01:05:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:11:53.972 01:05:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:11:53.972 01:05:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4108716'
00:11:53.972 killing process with pid 4108716
00:11:53.972 01:05:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 4108716
00:11:53.972 01:05:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 4108716
00:11:54.232 01:05:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:11:54.232 01:05:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:11:54.232 01:05:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:11:54.232 01:05:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:11:54.232 01:05:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns
00:11:54.232 01:05:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:54.232 01:05:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:11:54.232 01:05:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:56.139 01:05:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:11:56.139
00:11:56.139 real 0m19.067s
00:11:56.139 user 1m4.934s
00:11:56.139 sys 0m5.632s
00:11:56.139 01:05:12 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable
00:11:56.139 01:05:12 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:11:56.139 ************************************
00:11:56.139 END TEST nvmf_lvol
00:11:56.139 ************************************
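Teardown runs in strict reverse order of setup, which is what the nvmftestfini trap encodes; roughly (a sketch of the helper's effective steps, not a verbatim listing — _remove_spdk_ns's own output is redirected away in the trace, and deleting the namespace is its presumed effect):

    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    rpc.py bdev_lvol_delete "$lvol" && rpc.py bdev_lvol_delete_lvstore -u "$lvs"
    modprobe -v -r nvme-tcp           # unloads nvme_tcp, nvme_fabrics, nvme_keyring on the host
    kill "$nvmfpid" && wait "$nvmfpid"
    ip netns delete cvl_0_0_ns_spdk   # physical port falls back to the root namespace
    ip -4 addr flush cvl_0_1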
00:11:56.139 01:05:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:11:56.139 01:05:12 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp
00:11:56.139 01:05:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:11:56.139 01:05:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:11:56.139 01:05:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:11:56.139 ************************************
00:11:56.139 START TEST nvmf_lvs_grow
00:11:56.139 ************************************
00:11:56.139 01:05:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp
00:11:56.398 * Looking for test storage...
00:11:56.398 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:11:56.398 01:05:12 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:11:56.398 01:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s
00:11:56.398 01:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:11:56.398 01:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:11:56.398 01:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:11:56.398 01:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:11:56.398 01:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:11:56.398 01:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:11:56.398 01:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:11:56.398 01:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:11:56.398 01:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:11:56.398 01:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:11:56.398 01:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:11:56.398 01:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a
00:11:56.398 01:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:11:56.398 01:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:11:56.398 01:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:11:56.398 01:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:11:56.398 01:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:11:56.398 01:05:12 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:11:56.398 01:05:12 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:11:56.398 01:05:12 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:11:56.398 01:05:12 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:56.398 01:05:12 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:56.398 01:05:12 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:56.398 01:05:12 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH
00:11:56.398 01:05:12 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:56.398 01:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0
00:11:56.398 01:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:11:56.398 01:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:11:56.398 01:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:11:56.398 01:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:11:56.398 01:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:11:56.398 01:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:11:56.398 01:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:11:56.398 01:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0
00:11:56.398 01:05:12 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:11:56.398 01:05:12 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:11:56.398 01:05:12 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit
00:11:56.398 01:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:11:56.398 01:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:11:56.398 01:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs
00:11:56.398 01:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no
00:11:56.398 01:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns
00:11:56.398 01:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:56.398 01:05:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:11:56.398 01:05:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:56.398 01:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:11:56.398 01:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:11:56.398 01:05:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable
00:11:56.398 01:05:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=()
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=()
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=()
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=()
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=()
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=()
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=()
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
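The device-ID tables being populated above are how the harness decides which NIC family it is driving; in shorthand (derived from the appends in this trace):

    # Intel (0x8086):    0x1592, 0x159b -> e810      0x37d2 -> x722
    # Mellanox (0x15b3): 0xa2dc 0x1021 0xa2d6 0x101d 0x1017 0x1019 0x1015 0x1013 -> mlx
    # With SPDK_TEST_NVMF_NICS=e810 and a TCP transport, only the e810 list survives:
    pci_devs=("${e810[@]}")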
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)'
00:11:58.298 Found 0000:09:00.0 (0x8086 - 0x159b)
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)'
00:11:58.298 Found 0000:09:00.1 (0x8086 - 0x159b)
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]]
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0'
00:11:58.298 Found net devices under 0000:09:00.0: cvl_0_0
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]]
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1'
00:11:58.298 Found net devices under 0000:09:00.1: cvl_0_1
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:11:58.298 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:11:58.299 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:11:58.299 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:11:58.299 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:11:58.299 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:11:58.557 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:11:58.557 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:11:58.557 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:11:58.557 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:58.557 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms
00:11:58.557
00:11:58.557 --- 10.0.0.2 ping statistics ---
00:11:58.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:58.557 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms
00:11:58.557 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:11:58.557 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:58.557 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms
00:11:58.557
00:11:58.557 --- 10.0.0.1 ping statistics ---
00:11:58.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:58.557 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms
00:11:58.557 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:58.557 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0
00:11:58.557 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:11:58.557 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:58.557 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:11:58.557 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:11:58.557 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:58.557 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:11:58.557 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:11:58.557 01:05:14 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1
00:11:58.557 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:11:58.557 01:05:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable
00:11:58.557 01:05:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:11:58.557 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=4112294
00:11:58.557 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 4112294
00:11:58.557 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:11:58.557 01:05:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 4112294 ']'
00:11:58.557 01:05:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:58.557 01:05:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100
00:11:58.557 01:05:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:58.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:58.557 01:05:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable
00:11:58.557 01:05:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:11:58.557 [2024-07-16 01:05:14.403068] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization...
00:11:58.557 [2024-07-16 01:05:14.403147] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:58.557 EAL: No free 2048 kB hugepages reported on node 1
00:11:58.557 [2024-07-16 01:05:14.469528] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:58.815 [2024-07-16 01:05:14.571215] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:11:58.815 [2024-07-16 01:05:14.571288] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:11:58.815 [2024-07-16 01:05:14.571311] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:11:58.815 [2024-07-16 01:05:14.571322] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running.
00:11:58.815 [2024-07-16 01:05:14.571332] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:11:58.815 [2024-07-16 01:05:14.571355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:11:58.815 01:05:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:11:58.815 01:05:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0
00:11:58.815 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:11:58.815 01:05:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable
00:11:58.815 01:05:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:11:58.815 01:05:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:11:58.815 01:05:14 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:11:59.073 [2024-07-16 01:05:14.933332] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:11:59.073 01:05:14 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow
00:11:59.073 01:05:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:11:59.073 01:05:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable
00:11:59.073 01:05:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:11:59.073 ************************************
00:11:59.073 START TEST lvs_grow_clean
00:11:59.073 ************************************
00:11:59.073 01:05:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow
00:11:59.073 01:05:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol
00:11:59.073 01:05:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters
00:11:59.073 01:05:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid
00:11:59.073 01:05:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200
00:11:59.073 01:05:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400
00:11:59.073 01:05:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150
00:11:59.073 01:05:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:11:59.073 01:05:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:11:59.073 01:05:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
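Here is the mechanism lvs_grow actually tests: the lvstore sits on an AIO bdev backed by an ordinary file, so the store can be grown by growing the file. Condensed from the steps this trace performs (file name shortened; the cluster counts follow from the 4 MiB --cluster-sz and the lvstore's metadata overhead):

    truncate -s 200M aio_file
    rpc.py bdev_aio_create aio_file aio_bdev 4096
    lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    # -> 49 usable data clusters out of 200 MiB
    rpc.py bdev_lvol_create -u "$lvs" lvol 150   # 150 MiB volume inside the store
    truncate -s 400M aio_file                    # grow the backing file
    rpc.py bdev_aio_rescan aio_bdev              # the bdev picks up the new block count...
    # ...but total_data_clusters stays 49 until an explicit:
    rpc.py bdev_lvol_grow_lvstore -u "$lvs"      # -> 99 clusters (done later, under I/O)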
00:11:59.330 01:05:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev
00:11:59.330 01:05:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
00:11:59.587 01:05:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=dc8ae378-3a4f-4b19-97c9-996097052e94
00:11:59.587 01:05:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc8ae378-3a4f-4b19-97c9-996097052e94
00:11:59.587 01:05:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters'
00:11:59.845 01:05:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49
00:11:59.845 01:05:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 ))
00:11:59.845 01:05:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u dc8ae378-3a4f-4b19-97c9-996097052e94 lvol 150
00:12:00.101 01:05:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=bfc4a3ce-8785-4017-abab-6c72fa16739e
00:12:00.101 01:05:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:12:00.101 01:05:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev
00:12:00.358 [2024-07-16 01:05:16.299073] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400
00:12:00.358 [2024-07-16 01:05:16.299153] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1
00:12:00.358 true
00:12:00.358 01:05:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc8ae378-3a4f-4b19-97c9-996097052e94
00:12:00.358 01:05:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters'
00:12:00.615 01:05:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 ))
00:12:00.615 01:05:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:12:00.871 01:05:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bfc4a3ce-8785-4017-abab-6c72fa16739e
00:12:01.128 01:05:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:12:01.383 [2024-07-16 01:05:17.282129] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:01.383 01:05:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:12:01.640 01:05:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=4112724
00:12:01.640 01:05:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z
00:12:01.640 01:05:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:12:01.640 01:05:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 4112724 /var/tmp/bdevperf.sock
00:12:01.640 01:05:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 4112724 ']'
00:12:01.640 01:05:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:12:01.640 01:05:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100
00:12:01.640 01:05:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:12:01.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:12:01.640 01:05:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable
00:12:01.640 01:05:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x
00:12:01.640 [2024-07-16 01:05:17.630500] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization...
00:12:01.640 [2024-07-16 01:05:17.630606] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4112724 ]
00:12:01.896 EAL: No free 2048 kB hugepages reported on node 1
00:12:01.896 [2024-07-16 01:05:17.689596] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:01.896 [2024-07-16 01:05:17.796937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:12:02.152 01:05:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:12:02.152 01:05:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0
00:12:02.152 01:05:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
00:12:02.409 Nvme0n1
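bdevperf is a second SPDK application with its own RPC socket (-r), so the initiator side can be configured without touching the target's /var/tmp/spdk.sock; the attach step just traced comes down to this sketch:

    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    # -z keeps bdevperf idle until a perform_tests RPC arrives; the namespace appears as Nvme0n1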
00:12:02.666 "vs": { 00:12:02.666 "nvme_version": "1.3" 00:12:02.666 }, 00:12:02.666 "ns_data": { 00:12:02.666 "id": 1, 00:12:02.666 "can_share": true 00:12:02.666 } 00:12:02.666 } 00:12:02.666 ], 00:12:02.666 "mp_policy": "active_passive" 00:12:02.666 } 00:12:02.666 } 00:12:02.666 ] 00:12:02.666 01:05:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=4112860 00:12:02.666 01:05:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:02.666 01:05:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:02.923 Running I/O for 10 seconds... 00:12:03.854 Latency(us) 00:12:03.854 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:03.854 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:03.854 Nvme0n1 : 1.00 15182.00 59.30 0.00 0.00 0.00 0.00 0.00 00:12:03.854 =================================================================================================================== 00:12:03.854 Total : 15182.00 59.30 0.00 0.00 0.00 0.00 0.00 00:12:03.854 00:12:04.787 01:05:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u dc8ae378-3a4f-4b19-97c9-996097052e94 00:12:04.787 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:04.787 Nvme0n1 : 2.00 15343.00 59.93 0.00 0.00 0.00 0.00 0.00 00:12:04.787 =================================================================================================================== 00:12:04.787 Total : 15343.00 59.93 0.00 0.00 0.00 0.00 0.00 00:12:04.787 00:12:05.044 true 00:12:05.044 01:05:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc8ae378-3a4f-4b19-97c9-996097052e94 00:12:05.044 01:05:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:05.302 01:05:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:05.302 01:05:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:05.302 01:05:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 4112860 00:12:05.868 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:05.868 Nvme0n1 : 3.00 15418.00 60.23 0.00 0.00 0.00 0.00 0.00 00:12:05.868 =================================================================================================================== 00:12:05.868 Total : 15418.00 60.23 0.00 0.00 0.00 0.00 0.00 00:12:05.868 00:12:06.804 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:06.804 Nvme0n1 : 4.00 15516.75 60.61 0.00 0.00 0.00 0.00 0.00 00:12:06.804 =================================================================================================================== 00:12:06.804 Total : 15516.75 60.61 0.00 0.00 0.00 0.00 0.00 00:12:06.804 00:12:07.797 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:07.797 Nvme0n1 : 5.00 15589.00 60.89 0.00 0.00 0.00 0.00 0.00 00:12:07.798 =================================================================================================================== 00:12:07.798 
Total : 15589.00 60.89 0.00 0.00 0.00 0.00 0.00 00:12:07.798 00:12:08.729 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:08.729 Nvme0n1 : 6.00 15658.33 61.17 0.00 0.00 0.00 0.00 0.00 00:12:08.729 =================================================================================================================== 00:12:08.729 Total : 15658.33 61.17 0.00 0.00 0.00 0.00 0.00 00:12:08.729 00:12:10.099 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:10.099 Nvme0n1 : 7.00 15689.29 61.29 0.00 0.00 0.00 0.00 0.00 00:12:10.099 =================================================================================================================== 00:12:10.099 Total : 15689.29 61.29 0.00 0.00 0.00 0.00 0.00 00:12:10.099 00:12:11.030 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:11.030 Nvme0n1 : 8.00 15728.75 61.44 0.00 0.00 0.00 0.00 0.00 00:12:11.030 =================================================================================================================== 00:12:11.030 Total : 15728.75 61.44 0.00 0.00 0.00 0.00 0.00 00:12:11.030 00:12:11.964 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:11.964 Nvme0n1 : 9.00 15759.11 61.56 0.00 0.00 0.00 0.00 0.00 00:12:11.964 =================================================================================================================== 00:12:11.964 Total : 15759.11 61.56 0.00 0.00 0.00 0.00 0.00 00:12:11.964 00:12:12.897 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:12.897 Nvme0n1 : 10.00 15777.80 61.63 0.00 0.00 0.00 0.00 0.00 00:12:12.897 =================================================================================================================== 00:12:12.897 Total : 15777.80 61.63 0.00 0.00 0.00 0.00 0.00 00:12:12.897 00:12:12.897 00:12:12.897 Latency(us) 00:12:12.897 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:12.897 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:12.897 Nvme0n1 : 10.00 15783.72 61.66 0.00 0.00 8104.79 5048.70 17670.45 00:12:12.897 =================================================================================================================== 00:12:12.897 Total : 15783.72 61.66 0.00 0.00 8104.79 5048.70 17670.45 00:12:12.897 0 00:12:12.897 01:05:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 4112724 00:12:12.897 01:05:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 4112724 ']' 00:12:12.897 01:05:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 4112724 00:12:12.897 01:05:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:12:12.897 01:05:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:12.897 01:05:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4112724 00:12:12.897 01:05:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:12.897 01:05:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:12.897 01:05:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4112724' 00:12:12.897 killing process with pid 4112724 00:12:12.897 01:05:28 
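Because bdevperf was started with -z, the workload itself is RPC-driven, which is what lets the script grow the lvstore while writes are in flight and simply wait for the 10-second run to finish; the shape of it (a sketch using the values from this run):

    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
    run_test_pid=$!
    sleep 2
    rpc.py bdev_lvol_grow_lvstore -u "$lvs"   # 49 -> 99 clusters at the two-second mark
    wait "$run_test_pid"

The steady ~15.2k-15.8k IOPS across the grow in the per-second tables above is the pass signal here.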
00:12:12.897 01:05:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 4112724
00:12:12.897 01:05:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 4112724 ']'
00:12:12.897 01:05:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 4112724
00:12:12.897 01:05:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname
00:12:12.897 01:05:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:12:12.897 01:05:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4112724
00:12:12.897 01:05:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:12:12.897 01:05:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:12:12.897 01:05:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4112724'
00:12:12.897 killing process with pid 4112724
00:12:12.897 01:05:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 4112724
00:12:12.897 Received shutdown signal, test time was about 10.000000 seconds
00:12:12.897
00:12:12.897 Latency(us)
00:12:12.897 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:12.897 ===================================================================================================================
00:12:12.897 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:12:12.897 01:05:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 4112724
00:12:13.154 01:05:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:12:13.411 01:05:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:12:13.668 01:05:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc8ae378-3a4f-4b19-97c9-996097052e94
00:12:13.668 01:05:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters'
00:12:13.926 01:05:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61
00:12:13.926 01:05:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]]
00:12:13.926 01:05:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:12:14.201 [2024-07-16 01:05:30.115723] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs
00:12:14.201 01:05:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc8ae378-3a4f-4b19-97c9-996097052e94
00:12:14.201 01:05:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0
00:12:14.201 01:05:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc8ae378-3a4f-4b19-97c9-996097052e94
00:12:14.201 01:05:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:12:14.201 01:05:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:12:14.201 01:05:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:12:14.201 01:05:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:12:14.201 01:05:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:12:14.201 01:05:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:14.201 01:05:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:14.201 01:05:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc8ae378-3a4f-4b19-97c9-996097052e94 00:12:14.467 request: 00:12:14.467 { 00:12:14.467 "uuid": "dc8ae378-3a4f-4b19-97c9-996097052e94", 00:12:14.467 "method": "bdev_lvol_get_lvstores", 00:12:14.467 "req_id": 1 00:12:14.467 } 00:12:14.467 Got JSON-RPC error response 00:12:14.467 response: 00:12:14.467 { 00:12:14.467 "code": -19, 00:12:14.467 "message": "No such device" 00:12:14.467 } 00:12:14.467 01:05:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:12:14.467 01:05:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:14.467 01:05:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:14.467 01:05:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:14.467 01:05:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:14.724 aio_bdev 00:12:14.724 01:05:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev bfc4a3ce-8785-4017-abab-6c72fa16739e 00:12:14.724 01:05:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=bfc4a3ce-8785-4017-abab-6c72fa16739e 00:12:14.724 01:05:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:14.724 01:05:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:12:14.724 01:05:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:14.724 01:05:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:14.724 01:05:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:14.981 01:05:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b bfc4a3ce-8785-4017-abab-6c72fa16739e -t 2000 00:12:15.242 [ 00:12:15.242 { 00:12:15.242 "name": "bfc4a3ce-8785-4017-abab-6c72fa16739e", 00:12:15.242 "aliases": [ 00:12:15.242 "lvs/lvol" 00:12:15.242 ], 00:12:15.242 "product_name": "Logical Volume", 00:12:15.242 "block_size": 4096, 00:12:15.242 "num_blocks": 38912, 00:12:15.242 "uuid": "bfc4a3ce-8785-4017-abab-6c72fa16739e", 00:12:15.242 "assigned_rate_limits": { 00:12:15.242 "rw_ios_per_sec": 0, 00:12:15.242 "rw_mbytes_per_sec": 0, 00:12:15.242 "r_mbytes_per_sec": 0, 00:12:15.242 "w_mbytes_per_sec": 0 00:12:15.242 }, 00:12:15.242 "claimed": false, 00:12:15.242 "zoned": false, 00:12:15.242 "supported_io_types": { 00:12:15.242 "read": true, 00:12:15.242 "write": true, 00:12:15.242 "unmap": true, 00:12:15.242 "flush": false, 00:12:15.242 "reset": true, 00:12:15.242 "nvme_admin": false, 00:12:15.242 "nvme_io": false, 00:12:15.242 
"nvme_io_md": false, 00:12:15.242 "write_zeroes": true, 00:12:15.242 "zcopy": false, 00:12:15.242 "get_zone_info": false, 00:12:15.242 "zone_management": false, 00:12:15.242 "zone_append": false, 00:12:15.242 "compare": false, 00:12:15.242 "compare_and_write": false, 00:12:15.242 "abort": false, 00:12:15.242 "seek_hole": true, 00:12:15.242 "seek_data": true, 00:12:15.242 "copy": false, 00:12:15.242 "nvme_iov_md": false 00:12:15.242 }, 00:12:15.242 "driver_specific": { 00:12:15.242 "lvol": { 00:12:15.242 "lvol_store_uuid": "dc8ae378-3a4f-4b19-97c9-996097052e94", 00:12:15.242 "base_bdev": "aio_bdev", 00:12:15.242 "thin_provision": false, 00:12:15.242 "num_allocated_clusters": 38, 00:12:15.242 "snapshot": false, 00:12:15.242 "clone": false, 00:12:15.242 "esnap_clone": false 00:12:15.242 } 00:12:15.242 } 00:12:15.242 } 00:12:15.242 ] 00:12:15.242 01:05:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:12:15.242 01:05:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc8ae378-3a4f-4b19-97c9-996097052e94 00:12:15.242 01:05:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:15.501 01:05:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:15.501 01:05:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc8ae378-3a4f-4b19-97c9-996097052e94 00:12:15.501 01:05:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:15.758 01:05:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:15.758 01:05:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete bfc4a3ce-8785-4017-abab-6c72fa16739e 00:12:16.015 01:05:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u dc8ae378-3a4f-4b19-97c9-996097052e94 00:12:16.271 01:05:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:16.528 01:05:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:16.528 00:12:16.528 real 0m17.517s 00:12:16.528 user 0m16.995s 00:12:16.528 sys 0m1.962s 00:12:16.528 01:05:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:16.528 01:05:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:12:16.528 ************************************ 00:12:16.528 END TEST lvs_grow_clean 00:12:16.528 ************************************ 00:12:16.785 01:05:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:12:16.785 01:05:32 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:12:16.785 01:05:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:16.785 01:05:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:12:16.785 01:05:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:16.785 ************************************ 00:12:16.785 START TEST lvs_grow_dirty 00:12:16.785 ************************************ 00:12:16.785 01:05:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:12:16.785 01:05:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:16.785 01:05:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:16.785 01:05:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:16.785 01:05:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:16.785 01:05:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:16.785 01:05:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:16.785 01:05:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:16.785 01:05:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:16.785 01:05:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:17.042 01:05:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:12:17.042 01:05:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:17.299 01:05:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=3ff4a8f8-bac6-4d1a-97f9-e9782e5d656b 00:12:17.299 01:05:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3ff4a8f8-bac6-4d1a-97f9-e9782e5d656b 00:12:17.299 01:05:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:17.555 01:05:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:17.555 01:05:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:17.555 01:05:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3ff4a8f8-bac6-4d1a-97f9-e9782e5d656b lvol 150 00:12:17.812 01:05:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=16f70108-9c17-45f5-9b1f-66d1baecba71 00:12:17.812 01:05:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:17.812 01:05:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:18.069 
[2024-07-16 01:05:33.836053] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:18.069 [2024-07-16 01:05:33.836134] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:18.069 true 00:12:18.069 01:05:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3ff4a8f8-bac6-4d1a-97f9-e9782e5d656b 00:12:18.069 01:05:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:12:18.327 01:05:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:18.327 01:05:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:18.583 01:05:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 16f70108-9c17-45f5-9b1f-66d1baecba71 00:12:18.840 01:05:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:19.112 [2024-07-16 01:05:34.843091] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:19.112 01:05:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:19.112 01:05:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=4114891 00:12:19.112 01:05:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:19.112 01:05:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:19.112 01:05:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 4114891 /var/tmp/bdevperf.sock 00:12:19.112 01:05:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 4114891 ']' 00:12:19.112 01:05:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:19.112 01:05:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:19.112 01:05:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:19.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
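At this point the dirty pass has rebuilt the full stack and is waiting for bdevperf. A condensed sketch of the RPC sequence traced above, assuming a running target and rpc.py on PATH, with the long workspace paths and the reported UUIDs elided:

    truncate -s 200M aio_file                 # aio_file stands in for test/nvmf/target/aio_bdev
    rpc.py bdev_aio_create aio_file aio_bdev 4096
    rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs   # 49 data clusters
    rpc.py bdev_lvol_create -u <lvs_uuid> lvol 150
    truncate -s 400M aio_file                 # grow the backing file...
    rpc.py bdev_aio_rescan aio_bdev           # ...and rescan: block count 51200 -> 102400
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol_uuid>
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420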
00:12:19.112 01:05:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:19.112 01:05:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:19.368 [2024-07-16 01:05:35.150402] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:12:19.368 [2024-07-16 01:05:35.150492] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4114891 ] 00:12:19.368 EAL: No free 2048 kB hugepages reported on node 1 00:12:19.368 [2024-07-16 01:05:35.215738] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:19.368 [2024-07-16 01:05:35.327308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:19.625 01:05:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:19.625 01:05:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:12:19.625 01:05:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:19.882 Nvme0n1 00:12:19.882 01:05:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:20.139 [ 00:12:20.139 { 00:12:20.139 "name": "Nvme0n1", 00:12:20.139 "aliases": [ 00:12:20.139 "16f70108-9c17-45f5-9b1f-66d1baecba71" 00:12:20.139 ], 00:12:20.139 "product_name": "NVMe disk", 00:12:20.139 "block_size": 4096, 00:12:20.139 "num_blocks": 38912, 00:12:20.139 "uuid": "16f70108-9c17-45f5-9b1f-66d1baecba71", 00:12:20.139 "assigned_rate_limits": { 00:12:20.139 "rw_ios_per_sec": 0, 00:12:20.139 "rw_mbytes_per_sec": 0, 00:12:20.139 "r_mbytes_per_sec": 0, 00:12:20.139 "w_mbytes_per_sec": 0 00:12:20.139 }, 00:12:20.139 "claimed": false, 00:12:20.139 "zoned": false, 00:12:20.139 "supported_io_types": { 00:12:20.139 "read": true, 00:12:20.139 "write": true, 00:12:20.139 "unmap": true, 00:12:20.139 "flush": true, 00:12:20.139 "reset": true, 00:12:20.139 "nvme_admin": true, 00:12:20.139 "nvme_io": true, 00:12:20.139 "nvme_io_md": false, 00:12:20.139 "write_zeroes": true, 00:12:20.139 "zcopy": false, 00:12:20.139 "get_zone_info": false, 00:12:20.139 "zone_management": false, 00:12:20.139 "zone_append": false, 00:12:20.139 "compare": true, 00:12:20.139 "compare_and_write": true, 00:12:20.139 "abort": true, 00:12:20.139 "seek_hole": false, 00:12:20.139 "seek_data": false, 00:12:20.139 "copy": true, 00:12:20.139 "nvme_iov_md": false 00:12:20.139 }, 00:12:20.139 "memory_domains": [ 00:12:20.139 { 00:12:20.139 "dma_device_id": "system", 00:12:20.139 "dma_device_type": 1 00:12:20.139 } 00:12:20.139 ], 00:12:20.139 "driver_specific": { 00:12:20.139 "nvme": [ 00:12:20.139 { 00:12:20.139 "trid": { 00:12:20.139 "trtype": "TCP", 00:12:20.139 "adrfam": "IPv4", 00:12:20.139 "traddr": "10.0.0.2", 00:12:20.139 "trsvcid": "4420", 00:12:20.139 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:20.139 }, 00:12:20.139 "ctrlr_data": { 00:12:20.139 "cntlid": 1, 00:12:20.139 "vendor_id": "0x8086", 00:12:20.139 "model_number": "SPDK bdev Controller", 00:12:20.139 "serial_number": "SPDK0", 
00:12:20.139 "firmware_revision": "24.09", 00:12:20.139 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:20.139 "oacs": { 00:12:20.139 "security": 0, 00:12:20.139 "format": 0, 00:12:20.139 "firmware": 0, 00:12:20.139 "ns_manage": 0 00:12:20.139 }, 00:12:20.139 "multi_ctrlr": true, 00:12:20.139 "ana_reporting": false 00:12:20.139 }, 00:12:20.139 "vs": { 00:12:20.139 "nvme_version": "1.3" 00:12:20.139 }, 00:12:20.139 "ns_data": { 00:12:20.139 "id": 1, 00:12:20.139 "can_share": true 00:12:20.139 } 00:12:20.139 } 00:12:20.139 ], 00:12:20.139 "mp_policy": "active_passive" 00:12:20.139 } 00:12:20.139 } 00:12:20.139 ] 00:12:20.139 01:05:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=4115027 00:12:20.139 01:05:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:20.139 01:05:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:20.396 Running I/O for 10 seconds... 00:12:21.330 Latency(us) 00:12:21.330 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:21.330 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:21.330 Nvme0n1 : 1.00 15241.00 59.54 0.00 0.00 0.00 0.00 0.00 00:12:21.330 =================================================================================================================== 00:12:21.330 Total : 15241.00 59.54 0.00 0.00 0.00 0.00 0.00 00:12:21.330 00:12:22.263 01:05:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3ff4a8f8-bac6-4d1a-97f9-e9782e5d656b 00:12:22.263 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:22.263 Nvme0n1 : 2.00 15527.00 60.65 0.00 0.00 0.00 0.00 0.00 00:12:22.263 =================================================================================================================== 00:12:22.263 Total : 15527.00 60.65 0.00 0.00 0.00 0.00 0.00 00:12:22.263 00:12:22.521 true 00:12:22.521 01:05:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3ff4a8f8-bac6-4d1a-97f9-e9782e5d656b 00:12:22.521 01:05:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:22.779 01:05:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:22.779 01:05:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:22.779 01:05:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 4115027 00:12:23.344 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:23.344 Nvme0n1 : 3.00 15600.67 60.94 0.00 0.00 0.00 0.00 0.00 00:12:23.344 =================================================================================================================== 00:12:23.344 Total : 15600.67 60.94 0.00 0.00 0.00 0.00 0.00 00:12:23.344 00:12:24.308 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:24.308 Nvme0n1 : 4.00 15654.50 61.15 0.00 0.00 0.00 0.00 0.00 00:12:24.308 =================================================================================================================== 00:12:24.308 Total : 15654.50 61.15 0.00 
0.00 0.00 0.00 0.00 00:12:24.308 00:12:25.242 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:25.242 Nvme0n1 : 5.00 15724.00 61.42 0.00 0.00 0.00 0.00 0.00 00:12:25.242 =================================================================================================================== 00:12:25.242 Total : 15724.00 61.42 0.00 0.00 0.00 0.00 0.00 00:12:25.242 00:12:26.617 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:26.617 Nvme0n1 : 6.00 15802.67 61.73 0.00 0.00 0.00 0.00 0.00 00:12:26.617 =================================================================================================================== 00:12:26.617 Total : 15802.67 61.73 0.00 0.00 0.00 0.00 0.00 00:12:26.617 00:12:27.550 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:27.550 Nvme0n1 : 7.00 15841.00 61.88 0.00 0.00 0.00 0.00 0.00 00:12:27.550 =================================================================================================================== 00:12:27.550 Total : 15841.00 61.88 0.00 0.00 0.00 0.00 0.00 00:12:27.550 00:12:28.482 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:28.482 Nvme0n1 : 8.00 15885.12 62.05 0.00 0.00 0.00 0.00 0.00 00:12:28.482 =================================================================================================================== 00:12:28.482 Total : 15885.12 62.05 0.00 0.00 0.00 0.00 0.00 00:12:28.482 00:12:29.415 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:29.415 Nvme0n1 : 9.00 15926.33 62.21 0.00 0.00 0.00 0.00 0.00 00:12:29.415 =================================================================================================================== 00:12:29.415 Total : 15926.33 62.21 0.00 0.00 0.00 0.00 0.00 00:12:29.415 00:12:30.348 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:30.348 Nvme0n1 : 10.00 15946.60 62.29 0.00 0.00 0.00 0.00 0.00 00:12:30.348 =================================================================================================================== 00:12:30.348 Total : 15946.60 62.29 0.00 0.00 0.00 0.00 0.00 00:12:30.348 00:12:30.348 00:12:30.348 Latency(us) 00:12:30.348 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:30.348 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:30.348 Nvme0n1 : 10.00 15953.50 62.32 0.00 0.00 8018.62 4538.97 15825.73 00:12:30.348 =================================================================================================================== 00:12:30.348 Total : 15953.50 62.32 0.00 0.00 8018.62 4538.97 15825.73 00:12:30.348 0 00:12:30.348 01:05:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 4114891 00:12:30.348 01:05:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 4114891 ']' 00:12:30.348 01:05:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 4114891 00:12:30.348 01:05:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:12:30.348 01:05:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:30.348 01:05:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4114891 00:12:30.348 01:05:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:30.348 01:05:46 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:30.348 01:05:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4114891' 00:12:30.348 killing process with pid 4114891 00:12:30.348 01:05:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 4114891 00:12:30.348 Received shutdown signal, test time was about 10.000000 seconds 00:12:30.348 00:12:30.348 Latency(us) 00:12:30.348 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:30.348 =================================================================================================================== 00:12:30.348 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:30.348 01:05:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 4114891 00:12:30.606 01:05:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:30.862 01:05:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:31.428 01:05:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3ff4a8f8-bac6-4d1a-97f9-e9782e5d656b 00:12:31.428 01:05:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:31.428 01:05:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:31.428 01:05:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:12:31.428 01:05:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 4112294 00:12:31.428 01:05:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 4112294 00:12:31.428 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 4112294 Killed "${NVMF_APP[@]}" "$@" 00:12:31.428 01:05:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:12:31.428 01:05:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:12:31.428 01:05:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:31.428 01:05:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:31.428 01:05:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:31.428 01:05:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=4116265 00:12:31.428 01:05:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:31.428 01:05:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 4116265 00:12:31.428 01:05:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 4116265 ']' 00:12:31.428 01:05:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.428 01:05:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:12:31.428 01:05:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:31.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:31.428 01:05:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:31.428 01:05:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:31.687 [2024-07-16 01:05:47.457634] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:12:31.687 [2024-07-16 01:05:47.457713] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:31.687 EAL: No free 2048 kB hugepages reported on node 1 00:12:31.687 [2024-07-16 01:05:47.522851] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:31.687 [2024-07-16 01:05:47.632989] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:31.687 [2024-07-16 01:05:47.633054] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:31.687 [2024-07-16 01:05:47.633067] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:31.687 [2024-07-16 01:05:47.633079] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:31.687 [2024-07-16 01:05:47.633089] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
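With the replacement target up (nvmfpid 4116265), re-creating the aio bdev over the same file is what forces the recovery shown below: the previous process died with the lvstore open, so blobstore replays its metadata on load. A sketch of the reload-and-verify step, path and UUID elided:

    rpc.py bdev_aio_create aio_file aio_bdev 4096        # triggers 'Performing recovery on blobstore'
    rpc.py bdev_lvol_get_lvstores -u <lvs_uuid> | jq -r '.[0].free_clusters'        # expected: 61
    rpc.py bdev_lvol_get_lvstores -u <lvs_uuid> | jq -r '.[0].total_data_clusters'  # expected: 99 (grown size)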
00:12:31.687 [2024-07-16 01:05:47.633116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.945 01:05:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:31.945 01:05:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:12:31.945 01:05:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:31.945 01:05:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:31.945 01:05:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:31.945 01:05:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:31.945 01:05:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:32.203 [2024-07-16 01:05:48.045740] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:12:32.203 [2024-07-16 01:05:48.045853] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:12:32.203 [2024-07-16 01:05:48.045899] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:12:32.203 01:05:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:12:32.203 01:05:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 16f70108-9c17-45f5-9b1f-66d1baecba71 00:12:32.203 01:05:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=16f70108-9c17-45f5-9b1f-66d1baecba71 00:12:32.203 01:05:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:32.203 01:05:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:12:32.203 01:05:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:32.203 01:05:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:32.203 01:05:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:32.461 01:05:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 16f70108-9c17-45f5-9b1f-66d1baecba71 -t 2000 00:12:32.718 [ 00:12:32.718 { 00:12:32.718 "name": "16f70108-9c17-45f5-9b1f-66d1baecba71", 00:12:32.718 "aliases": [ 00:12:32.718 "lvs/lvol" 00:12:32.718 ], 00:12:32.718 "product_name": "Logical Volume", 00:12:32.718 "block_size": 4096, 00:12:32.718 "num_blocks": 38912, 00:12:32.718 "uuid": "16f70108-9c17-45f5-9b1f-66d1baecba71", 00:12:32.718 "assigned_rate_limits": { 00:12:32.718 "rw_ios_per_sec": 0, 00:12:32.718 "rw_mbytes_per_sec": 0, 00:12:32.718 "r_mbytes_per_sec": 0, 00:12:32.718 "w_mbytes_per_sec": 0 00:12:32.718 }, 00:12:32.718 "claimed": false, 00:12:32.718 "zoned": false, 00:12:32.718 "supported_io_types": { 00:12:32.718 "read": true, 00:12:32.718 "write": true, 00:12:32.718 "unmap": true, 00:12:32.718 "flush": false, 00:12:32.718 "reset": true, 00:12:32.718 "nvme_admin": false, 00:12:32.718 "nvme_io": false, 00:12:32.718 "nvme_io_md": 
false, 00:12:32.718 "write_zeroes": true, 00:12:32.718 "zcopy": false, 00:12:32.718 "get_zone_info": false, 00:12:32.718 "zone_management": false, 00:12:32.718 "zone_append": false, 00:12:32.718 "compare": false, 00:12:32.718 "compare_and_write": false, 00:12:32.718 "abort": false, 00:12:32.718 "seek_hole": true, 00:12:32.718 "seek_data": true, 00:12:32.718 "copy": false, 00:12:32.718 "nvme_iov_md": false 00:12:32.718 }, 00:12:32.718 "driver_specific": { 00:12:32.718 "lvol": { 00:12:32.718 "lvol_store_uuid": "3ff4a8f8-bac6-4d1a-97f9-e9782e5d656b", 00:12:32.718 "base_bdev": "aio_bdev", 00:12:32.718 "thin_provision": false, 00:12:32.718 "num_allocated_clusters": 38, 00:12:32.718 "snapshot": false, 00:12:32.718 "clone": false, 00:12:32.718 "esnap_clone": false 00:12:32.718 } 00:12:32.718 } 00:12:32.718 } 00:12:32.718 ] 00:12:32.718 01:05:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:12:32.718 01:05:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3ff4a8f8-bac6-4d1a-97f9-e9782e5d656b 00:12:32.718 01:05:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:12:32.976 01:05:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:12:32.976 01:05:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3ff4a8f8-bac6-4d1a-97f9-e9782e5d656b 00:12:32.976 01:05:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:12:33.234 01:05:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:12:33.234 01:05:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:33.492 [2024-07-16 01:05:49.327204] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:33.492 01:05:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3ff4a8f8-bac6-4d1a-97f9-e9782e5d656b 00:12:33.492 01:05:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:12:33.492 01:05:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3ff4a8f8-bac6-4d1a-97f9-e9782e5d656b 00:12:33.492 01:05:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:33.492 01:05:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:33.492 01:05:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:33.492 01:05:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:33.492 01:05:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
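The NOT wrapper unwinding here is an expected-failure assertion: with aio_bdev deleted, the lvstore lookup must fail (the -19 'No such device' response follows). In essence:

    rpc.py bdev_aio_delete aio_bdev                       # hot-removes the base bdev, closing the lvstore
    if rpc.py bdev_lvol_get_lvstores -u <lvs_uuid>; then  # must fail now that the base bdev is gone
        echo 'lookup unexpectedly succeeded' >&2
        exit 1
    fi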
00:12:33.492 01:05:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:33.492 01:05:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:33.492 01:05:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:33.492 01:05:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3ff4a8f8-bac6-4d1a-97f9-e9782e5d656b 00:12:33.750 request: 00:12:33.750 { 00:12:33.750 "uuid": "3ff4a8f8-bac6-4d1a-97f9-e9782e5d656b", 00:12:33.750 "method": "bdev_lvol_get_lvstores", 00:12:33.750 "req_id": 1 00:12:33.750 } 00:12:33.750 Got JSON-RPC error response 00:12:33.750 response: 00:12:33.750 { 00:12:33.750 "code": -19, 00:12:33.750 "message": "No such device" 00:12:33.750 } 00:12:33.750 01:05:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:12:33.750 01:05:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:33.750 01:05:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:33.750 01:05:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:33.750 01:05:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:34.008 aio_bdev 00:12:34.008 01:05:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 16f70108-9c17-45f5-9b1f-66d1baecba71 00:12:34.008 01:05:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=16f70108-9c17-45f5-9b1f-66d1baecba71 00:12:34.008 01:05:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:34.008 01:05:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:12:34.008 01:05:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:34.008 01:05:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:34.008 01:05:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:34.266 01:05:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 16f70108-9c17-45f5-9b1f-66d1baecba71 -t 2000 00:12:34.523 [ 00:12:34.523 { 00:12:34.523 "name": "16f70108-9c17-45f5-9b1f-66d1baecba71", 00:12:34.523 "aliases": [ 00:12:34.523 "lvs/lvol" 00:12:34.523 ], 00:12:34.523 "product_name": "Logical Volume", 00:12:34.523 "block_size": 4096, 00:12:34.523 "num_blocks": 38912, 00:12:34.523 "uuid": "16f70108-9c17-45f5-9b1f-66d1baecba71", 00:12:34.523 "assigned_rate_limits": { 00:12:34.523 "rw_ios_per_sec": 0, 00:12:34.523 "rw_mbytes_per_sec": 0, 00:12:34.523 "r_mbytes_per_sec": 0, 00:12:34.523 "w_mbytes_per_sec": 0 00:12:34.523 }, 00:12:34.523 "claimed": false, 00:12:34.523 "zoned": false, 00:12:34.523 "supported_io_types": { 
00:12:34.523 "read": true, 00:12:34.523 "write": true, 00:12:34.523 "unmap": true, 00:12:34.523 "flush": false, 00:12:34.523 "reset": true, 00:12:34.523 "nvme_admin": false, 00:12:34.523 "nvme_io": false, 00:12:34.523 "nvme_io_md": false, 00:12:34.523 "write_zeroes": true, 00:12:34.523 "zcopy": false, 00:12:34.523 "get_zone_info": false, 00:12:34.523 "zone_management": false, 00:12:34.523 "zone_append": false, 00:12:34.523 "compare": false, 00:12:34.523 "compare_and_write": false, 00:12:34.523 "abort": false, 00:12:34.523 "seek_hole": true, 00:12:34.523 "seek_data": true, 00:12:34.523 "copy": false, 00:12:34.523 "nvme_iov_md": false 00:12:34.523 }, 00:12:34.523 "driver_specific": { 00:12:34.523 "lvol": { 00:12:34.523 "lvol_store_uuid": "3ff4a8f8-bac6-4d1a-97f9-e9782e5d656b", 00:12:34.523 "base_bdev": "aio_bdev", 00:12:34.523 "thin_provision": false, 00:12:34.523 "num_allocated_clusters": 38, 00:12:34.523 "snapshot": false, 00:12:34.523 "clone": false, 00:12:34.523 "esnap_clone": false 00:12:34.523 } 00:12:34.524 } 00:12:34.524 } 00:12:34.524 ] 00:12:34.524 01:05:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:12:34.524 01:05:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3ff4a8f8-bac6-4d1a-97f9-e9782e5d656b 00:12:34.524 01:05:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:34.780 01:05:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:34.780 01:05:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3ff4a8f8-bac6-4d1a-97f9-e9782e5d656b 00:12:34.780 01:05:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:35.038 01:05:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:35.038 01:05:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 16f70108-9c17-45f5-9b1f-66d1baecba71 00:12:35.296 01:05:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3ff4a8f8-bac6-4d1a-97f9-e9782e5d656b 00:12:35.553 01:05:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:35.811 01:05:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:35.811 00:12:35.811 real 0m19.080s 00:12:35.811 user 0m48.210s 00:12:35.811 sys 0m4.778s 00:12:35.811 01:05:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:35.811 01:05:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:35.811 ************************************ 00:12:35.811 END TEST lvs_grow_dirty 00:12:35.811 ************************************ 00:12:35.811 01:05:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:12:35.811 01:05:51 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
00:12:35.811 01:05:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:12:35.811 01:05:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:12:35.811 01:05:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:12:35.811 01:05:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:35.811 01:05:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:12:35.811 01:05:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:12:35.811 01:05:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:12:35.811 01:05:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:35.811 nvmf_trace.0 00:12:35.811 01:05:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:12:35.811 01:05:51 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:12:35.811 01:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:35.811 01:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:12:35.811 01:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:35.811 01:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:12:35.811 01:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:35.811 01:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:35.811 rmmod nvme_tcp 00:12:35.811 rmmod nvme_fabrics 00:12:35.811 rmmod nvme_keyring 00:12:35.811 01:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:35.811 01:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:12:35.811 01:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:12:35.811 01:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 4116265 ']' 00:12:35.811 01:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 4116265 00:12:35.811 01:05:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 4116265 ']' 00:12:35.811 01:05:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 4116265 00:12:35.811 01:05:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:12:35.811 01:05:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:35.811 01:05:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4116265 00:12:35.811 01:05:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:35.811 01:05:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:35.811 01:05:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4116265' 00:12:35.811 killing process with pid 4116265 00:12:35.811 01:05:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 4116265 00:12:35.811 01:05:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 4116265 00:12:36.070 01:05:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:36.070 01:05:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:36.070 01:05:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:36.070 
01:05:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:36.070 01:05:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:36.070 01:05:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:36.070 01:05:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:36.070 01:05:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:38.601 01:05:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:38.602 00:12:38.602 real 0m41.961s 00:12:38.602 user 1m10.884s 00:12:38.602 sys 0m8.641s 00:12:38.602 01:05:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:38.602 01:05:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:38.602 ************************************ 00:12:38.602 END TEST nvmf_lvs_grow 00:12:38.602 ************************************ 00:12:38.602 01:05:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:38.602 01:05:54 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:38.602 01:05:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:38.602 01:05:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:38.602 01:05:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:38.602 ************************************ 00:12:38.602 START TEST nvmf_bdev_io_wait 00:12:38.602 ************************************ 00:12:38.602 01:05:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:38.602 * Looking for test storage... 
00:12:38.602 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:38.602 01:05:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:38.602 01:05:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:12:38.602 01:05:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:38.602 01:05:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:38.602 01:05:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:38.602 01:05:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:38.602 01:05:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:38.602 01:05:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:38.602 01:05:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:38.602 01:05:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:38.602 01:05:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:38.602 01:05:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:38.602 01:05:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:38.602 01:05:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:38.602 01:05:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:38.602 01:05:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:38.602 01:05:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:38.602 01:05:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:38.602 01:05:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:38.602 01:05:54 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:38.602 01:05:54 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:38.602 01:05:54 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:38.602 01:05:54 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.602 01:05:54 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.602 01:05:54 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.602 01:05:54 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:12:38.603 01:05:54 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.603 01:05:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:12:38.603 01:05:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:38.603 01:05:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:38.603 01:05:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:38.603 01:05:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:38.603 01:05:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:38.603 01:05:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:38.603 01:05:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:38.603 01:05:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:38.603 01:05:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:38.603 01:05:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:38.603 01:05:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:12:38.603 01:05:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:38.603 01:05:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:38.603 01:05:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:38.603 01:05:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:38.603 01:05:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:38.603 01:05:54 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:38.603 01:05:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:38.603 01:05:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:38.603 01:05:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:38.603 01:05:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:38.603 01:05:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:12:38.603 01:05:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:40.505 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:40.505 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:12:40.505 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:40.505 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:40.505 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:40.506 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:40.506 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:40.506 Found net devices under 0000:09:00.0: cvl_0_0 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:40.506 Found net devices under 0000:09:00.1: cvl_0_1 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:40.506 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:40.506 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:12:40.506 00:12:40.506 --- 10.0.0.2 ping statistics --- 00:12:40.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:40.506 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:40.506 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:40.506 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:12:40.506 00:12:40.506 --- 10.0.0.1 ping statistics --- 00:12:40.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:40.506 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=4118765 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 4118765 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 4118765 ']' 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:40.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:40.506 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:40.765 [2024-07-16 01:05:56.506501] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 
00:12:40.765 [2024-07-16 01:05:56.506602] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:40.765 EAL: No free 2048 kB hugepages reported on node 1 00:12:40.765 [2024-07-16 01:05:56.571032] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:40.765 [2024-07-16 01:05:56.678293] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:40.765 [2024-07-16 01:05:56.678377] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:40.765 [2024-07-16 01:05:56.678402] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:40.765 [2024-07-16 01:05:56.678413] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:40.765 [2024-07-16 01:05:56.678422] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:40.765 [2024-07-16 01:05:56.678588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:40.765 [2024-07-16 01:05:56.678686] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:40.765 [2024-07-16 01:05:56.678776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:40.765 [2024-07-16 01:05:56.678794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:40.765 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:40.765 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:12:40.765 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:40.765 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:40.765 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:40.765 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:40.765 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:12:40.765 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.765 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:40.765 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.765 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:12:40.765 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.765 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:41.025 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.025 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:41.025 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.025 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:41.025 [2024-07-16 01:05:56.822925] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:41.025 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
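The trace above is the standard harness bring-up: nvmf_tgt is launched inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc, bdev options are set over the RPC socket before framework_start_init, and only then is the TCP transport created; the Malloc bdev, subsystem, namespace, and listener follow immediately below. A minimal stand-alone sketch of the same sequence driven with rpc.py (paths assume a stock SPDK tree; the polling loop stands in for the harness's waitforlisten):

    # Start the target inside the test namespace; --wait-for-rpc defers framework init.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -m 0xF --wait-for-rpc &
    # Poll the default RPC socket (/var/tmp/spdk.sock) until the app answers.
    until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done
    ./scripts/rpc.py bdev_set_options -p 5 -c 1   # must run before framework init
    ./scripts/rpc.py framework_start_init
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420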
00:12:41.026 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:41.026 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.026 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:41.026 Malloc0 00:12:41.026 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.026 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:41.026 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.026 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:41.026 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.026 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:41.026 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.026 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:41.026 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.026 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:41.026 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.026 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:41.026 [2024-07-16 01:05:56.883821] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:41.026 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.026 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=4118910 00:12:41.026 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=4118912 00:12:41.026 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:12:41.026 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:12:41.026 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:41.026 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:41.026 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=4118914 00:12:41.026 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:41.026 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:12:41.026 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:12:41.026 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:41.026 { 00:12:41.026 "params": { 00:12:41.026 "name": "Nvme$subsystem", 00:12:41.026 "trtype": "$TEST_TRANSPORT", 00:12:41.026 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:41.026 "adrfam": "ipv4", 00:12:41.026 "trsvcid": "$NVMF_PORT", 00:12:41.026 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:41.026 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:41.026 "hdgst": ${hdgst:-false}, 00:12:41.026 "ddgst": ${ddgst:-false} 00:12:41.026 }, 00:12:41.026 "method": "bdev_nvme_attach_controller" 00:12:41.026 } 00:12:41.026 EOF 00:12:41.026 )") 00:12:41.026 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:41.026 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:41.026 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=4118916 00:12:41.026 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:41.026 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:12:41.026 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:12:41.026 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:41.026 { 00:12:41.026 "params": { 00:12:41.026 "name": "Nvme$subsystem", 00:12:41.026 "trtype": "$TEST_TRANSPORT", 00:12:41.026 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:41.026 "adrfam": "ipv4", 00:12:41.026 "trsvcid": "$NVMF_PORT", 00:12:41.026 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:41.026 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:41.026 "hdgst": ${hdgst:-false}, 00:12:41.026 "ddgst": ${ddgst:-false} 00:12:41.026 }, 00:12:41.026 "method": "bdev_nvme_attach_controller" 00:12:41.026 } 00:12:41.026 EOF 00:12:41.026 )") 00:12:41.026 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:12:41.026 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:41.026 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:41.026 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:12:41.026 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:12:41.026 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:41.026 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:41.026 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:41.026 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:41.026 { 00:12:41.026 "params": { 00:12:41.026 "name": "Nvme$subsystem", 00:12:41.026 "trtype": "$TEST_TRANSPORT", 00:12:41.026 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:41.026 "adrfam": "ipv4", 00:12:41.026 "trsvcid": "$NVMF_PORT", 00:12:41.026 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:41.026 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:41.026 "hdgst": ${hdgst:-false}, 00:12:41.026 "ddgst": ${ddgst:-false} 00:12:41.026 }, 00:12:41.026 "method": "bdev_nvme_attach_controller" 00:12:41.026 } 00:12:41.026 EOF 00:12:41.026 )") 00:12:41.026 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:41.026 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:41.026 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:41.026 { 00:12:41.026 "params": { 
00:12:41.026 "name": "Nvme$subsystem", 00:12:41.026 "trtype": "$TEST_TRANSPORT", 00:12:41.026 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:41.026 "adrfam": "ipv4", 00:12:41.026 "trsvcid": "$NVMF_PORT", 00:12:41.026 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:41.026 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:41.026 "hdgst": ${hdgst:-false}, 00:12:41.026 "ddgst": ${ddgst:-false} 00:12:41.026 }, 00:12:41.026 "method": "bdev_nvme_attach_controller" 00:12:41.026 } 00:12:41.026 EOF 00:12:41.026 )") 00:12:41.026 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:41.026 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:41.026 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 4118910 00:12:41.026 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:41.026 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:41.026 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:41.026 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:41.026 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:41.026 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:41.026 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:41.026 "params": { 00:12:41.026 "name": "Nvme1", 00:12:41.026 "trtype": "tcp", 00:12:41.026 "traddr": "10.0.0.2", 00:12:41.026 "adrfam": "ipv4", 00:12:41.026 "trsvcid": "4420", 00:12:41.026 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:41.026 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:41.026 "hdgst": false, 00:12:41.026 "ddgst": false 00:12:41.026 }, 00:12:41.026 "method": "bdev_nvme_attach_controller" 00:12:41.026 }' 00:12:41.026 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:41.026 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:41.026 "params": { 00:12:41.026 "name": "Nvme1", 00:12:41.026 "trtype": "tcp", 00:12:41.026 "traddr": "10.0.0.2", 00:12:41.026 "adrfam": "ipv4", 00:12:41.026 "trsvcid": "4420", 00:12:41.026 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:41.026 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:41.026 "hdgst": false, 00:12:41.026 "ddgst": false 00:12:41.026 }, 00:12:41.026 "method": "bdev_nvme_attach_controller" 00:12:41.026 }' 00:12:41.026 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:41.026 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:41.026 "params": { 00:12:41.026 "name": "Nvme1", 00:12:41.026 "trtype": "tcp", 00:12:41.026 "traddr": "10.0.0.2", 00:12:41.026 "adrfam": "ipv4", 00:12:41.026 "trsvcid": "4420", 00:12:41.026 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:41.026 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:41.026 "hdgst": false, 00:12:41.026 "ddgst": false 00:12:41.026 }, 00:12:41.026 "method": "bdev_nvme_attach_controller" 00:12:41.026 }' 00:12:41.026 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:41.026 01:05:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:41.026 "params": { 00:12:41.026 "name": "Nvme1", 00:12:41.026 "trtype": "tcp", 00:12:41.026 "traddr": "10.0.0.2", 00:12:41.026 "adrfam": "ipv4", 00:12:41.026 "trsvcid": "4420", 00:12:41.026 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:41.026 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:41.026 "hdgst": false, 00:12:41.026 "ddgst": false 00:12:41.026 }, 00:12:41.026 "method": 
"bdev_nvme_attach_controller" 00:12:41.026 }' 00:12:41.026 [2024-07-16 01:05:56.933108] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:12:41.026 [2024-07-16 01:05:56.933117] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:12:41.026 [2024-07-16 01:05:56.933117] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:12:41.026 [2024-07-16 01:05:56.933129] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:12:41.026 [2024-07-16 01:05:56.933184] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:12:41.027 [2024-07-16 01:05:56.933202] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-16 01:05:56.933203] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-16 01:05:56.933203] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:12:41.027 --proc-type=auto ] 00:12:41.027 --proc-type=auto ] 00:12:41.027 EAL: No free 2048 kB hugepages reported on node 1 00:12:41.350 EAL: No free 2048 kB hugepages reported on node 1 00:12:41.350 [2024-07-16 01:05:57.108299] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:41.350 EAL: No free 2048 kB hugepages reported on node 1 00:12:41.350 [2024-07-16 01:05:57.206119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:12:41.350 [2024-07-16 01:05:57.209098] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:41.350 EAL: No free 2048 kB hugepages reported on node 1 00:12:41.350 [2024-07-16 01:05:57.307767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:12:41.350 [2024-07-16 01:05:57.326910] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:41.609 [2024-07-16 01:05:57.380005] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:41.609 [2024-07-16 01:05:57.457203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:12:41.609 [2024-07-16 01:05:57.476728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:12:41.609 Running I/O for 1 seconds... 00:12:41.868 Running I/O for 1 seconds... 00:12:41.868 Running I/O for 1 seconds... 00:12:41.868 Running I/O for 1 seconds... 
00:12:42.801 00:12:42.801 Latency(us) 00:12:42.801 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:42.801 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:12:42.801 Nvme1n1 : 1.02 7360.90 28.75 0.00 0.00 17163.89 6747.78 28738.75 00:12:42.801 =================================================================================================================== 00:12:42.801 Total : 7360.90 28.75 0.00 0.00 17163.89 6747.78 28738.75 00:12:42.801 00:12:42.801 Latency(us) 00:12:42.801 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:42.801 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:12:42.801 Nvme1n1 : 1.00 198376.72 774.91 0.00 0.00 642.66 259.41 867.75 00:12:42.801 =================================================================================================================== 00:12:42.801 Total : 198376.72 774.91 0.00 0.00 642.66 259.41 867.75 00:12:42.801 00:12:42.801 Latency(us) 00:12:42.801 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:42.801 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:12:42.801 Nvme1n1 : 1.01 7559.94 29.53 0.00 0.00 16840.53 9757.58 33981.63 00:12:42.801 =================================================================================================================== 00:12:42.801 Total : 7559.94 29.53 0.00 0.00 16840.53 9757.58 33981.63 00:12:42.801 00:12:42.801 Latency(us) 00:12:42.801 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:42.801 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:12:42.801 Nvme1n1 : 1.01 7379.04 28.82 0.00 0.00 17293.92 5024.43 43884.85 00:12:42.801 =================================================================================================================== 00:12:42.801 Total : 7379.04 28.82 0.00 0.00 17293.92 5024.43 43884.85 00:12:43.059 01:05:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 4118912 00:12:43.059 01:05:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 4118914 00:12:43.059 01:05:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 4118916 00:12:43.059 01:05:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:43.059 01:05:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.059 01:05:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:43.059 01:05:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.059 01:05:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:12:43.059 01:05:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:12:43.059 01:05:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:43.059 01:05:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:12:43.059 01:05:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:43.059 01:05:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:12:43.059 01:05:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:43.059 01:05:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:43.059 rmmod nvme_tcp 00:12:43.316 rmmod nvme_fabrics 00:12:43.317 rmmod nvme_keyring 00:12:43.317 01:05:59 nvmf_tcp.nvmf_bdev_io_wait 
-- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:43.317 01:05:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:12:43.317 01:05:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:12:43.317 01:05:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 4118765 ']' 00:12:43.317 01:05:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 4118765 00:12:43.317 01:05:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 4118765 ']' 00:12:43.317 01:05:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 4118765 00:12:43.317 01:05:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:12:43.317 01:05:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:43.317 01:05:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4118765 00:12:43.317 01:05:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:43.317 01:05:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:43.317 01:05:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4118765' 00:12:43.317 killing process with pid 4118765 00:12:43.317 01:05:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 4118765 00:12:43.317 01:05:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 4118765 00:12:43.576 01:05:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:43.576 01:05:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:43.576 01:05:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:43.576 01:05:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:43.576 01:05:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:43.576 01:05:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:43.576 01:05:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:43.576 01:05:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:45.479 01:06:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:45.479 00:12:45.479 real 0m7.309s 00:12:45.479 user 0m16.663s 00:12:45.479 sys 0m3.527s 00:12:45.480 01:06:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:45.480 01:06:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:45.480 ************************************ 00:12:45.480 END TEST nvmf_bdev_io_wait 00:12:45.480 ************************************ 00:12:45.480 01:06:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:45.480 01:06:01 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:45.480 01:06:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:45.480 01:06:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:45.480 01:06:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:45.738 ************************************ 00:12:45.738 START TEST nvmf_queue_depth 00:12:45.738 ************************************ 
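The END/START banners and the real/user/sys block above come from run_test in SPDK's autotest_common.sh, which wraps each test script with banners, times it, and propagates its exit status so the top-level suite can chain tests. A simplified sketch of that wrapper (the real function also toggles xtrace and timing bookkeeping, so this is an approximation, not the exact implementation):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                 # run the test script with its arguments
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }
    # as invoked above:
    run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp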
00:12:45.738 01:06:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:45.738 * Looking for test storage... 00:12:45.738 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:45.738 01:06:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:45.738 01:06:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:12:45.738 01:06:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:45.738 01:06:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:45.738 01:06:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:45.738 01:06:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:45.738 01:06:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:45.738 01:06:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:45.738 01:06:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:45.738 01:06:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:45.739 01:06:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:45.739 01:06:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:45.739 01:06:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:45.739 01:06:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:45.739 01:06:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:45.739 01:06:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:45.739 01:06:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:45.739 01:06:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:45.739 01:06:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:45.739 01:06:01 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:45.739 01:06:01 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:45.739 01:06:01 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:45.739 01:06:01 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.739 01:06:01 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.739 01:06:01 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.739 01:06:01 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:12:45.739 01:06:01 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.739 01:06:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:12:45.739 01:06:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:45.739 01:06:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:45.739 01:06:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:45.739 01:06:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:45.739 01:06:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:45.739 01:06:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:45.739 01:06:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:45.739 01:06:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:45.739 01:06:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:12:45.739 01:06:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:12:45.739 01:06:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:45.739 01:06:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:12:45.739 01:06:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:45.739 01:06:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:45.739 01:06:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:45.739 01:06:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:45.739 01:06:01 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:12:45.739 01:06:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:45.739 01:06:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:45.739 01:06:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:45.739 01:06:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:45.739 01:06:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:45.739 01:06:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:12:45.739 01:06:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:47.641 
01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:47.641 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:47.641 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:47.641 Found net devices under 0000:09:00.0: cvl_0_0 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:47.641 Found net devices under 0000:09:00.1: cvl_0_1 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:47.641 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:47.899 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:47.899 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:47.899 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:47.899 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:47.899 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:47.899 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:47.899 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:47.899 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:47.899 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:12:47.899 00:12:47.899 --- 10.0.0.2 ping statistics --- 00:12:47.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:47.899 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:12:47.899 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:47.899 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:47.899 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:12:47.899 00:12:47.899 --- 10.0.0.1 ping statistics --- 00:12:47.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:47.899 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:12:47.899 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:47.899 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:12:47.899 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:47.899 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:47.899 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:47.899 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:47.899 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:47.899 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:47.899 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:47.899 01:06:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:12:47.899 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:47.899 01:06:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:47.899 01:06:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:47.899 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=4121247 00:12:47.899 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:47.899 01:06:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 4121247 00:12:47.899 01:06:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 4121247 ']' 00:12:47.899 01:06:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:47.899 01:06:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:47.899 01:06:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:47.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:47.899 01:06:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:47.899 01:06:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:47.899 [2024-07-16 01:06:03.787371] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 
00:12:47.899 [2024-07-16 01:06:03.787463] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:47.899 EAL: No free 2048 kB hugepages reported on node 1 00:12:47.899 [2024-07-16 01:06:03.851425] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:48.157 [2024-07-16 01:06:03.956720] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:48.157 [2024-07-16 01:06:03.956794] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:48.157 [2024-07-16 01:06:03.956819] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:48.157 [2024-07-16 01:06:03.956830] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:48.157 [2024-07-16 01:06:03.956854] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:48.157 [2024-07-16 01:06:03.956890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:48.157 01:06:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:48.157 01:06:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:12:48.157 01:06:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:48.157 01:06:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:48.157 01:06:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:48.157 01:06:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:48.157 01:06:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:48.157 01:06:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.157 01:06:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:48.157 [2024-07-16 01:06:04.107895] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:48.157 01:06:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.157 01:06:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:48.157 01:06:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.157 01:06:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:48.414 Malloc0 00:12:48.414 01:06:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.414 01:06:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:48.414 01:06:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.414 01:06:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:48.414 01:06:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.414 01:06:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:48.414 01:06:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.414 
01:06:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:48.414 01:06:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.414 01:06:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:48.414 01:06:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.414 01:06:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:48.414 [2024-07-16 01:06:04.171446] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:48.414 01:06:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.414 01:06:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=4121273 00:12:48.414 01:06:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:12:48.414 01:06:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:48.414 01:06:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 4121273 /var/tmp/bdevperf.sock 00:12:48.414 01:06:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 4121273 ']' 00:12:48.414 01:06:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:48.414 01:06:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:48.414 01:06:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:48.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:48.414 01:06:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:48.414 01:06:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:48.414 [2024-07-16 01:06:04.216627] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 
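[annotation] The queue_depth setup just logged reduces to a short RPC sequence: create the TCP transport, back it with a malloc bdev (64 MiB, 512 B blocks), publish it as cnode1, and listen on 10.0.0.2:4420; bdevperf then attaches over that listener at queue depth 1024 with 4 KiB verify I/O for 10 s. Condensed sketch using the exact commands from the trace above (rpc.py defaults to /var/tmp/spdk.sock, where this target listens):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0    # 64 MiB bdev, 512 B block size
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420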
00:12:48.414 [2024-07-16 01:06:04.216694] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4121273 ] 00:12:48.414 EAL: No free 2048 kB hugepages reported on node 1 00:12:48.414 [2024-07-16 01:06:04.275688] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:48.414 [2024-07-16 01:06:04.383308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.672 01:06:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:48.672 01:06:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:12:48.672 01:06:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:12:48.672 01:06:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.673 01:06:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:48.673 NVMe0n1 00:12:48.673 01:06:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.673 01:06:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:48.673 Running I/O for 10 seconds... 00:13:00.867 00:13:00.867 Latency(us) 00:13:00.867 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:00.867 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:13:00.867 Verification LBA range: start 0x0 length 0x4000 00:13:00.867 NVMe0n1 : 10.08 8776.86 34.28 0.00 0.00 116083.13 23010.42 69516.71 00:13:00.867 =================================================================================================================== 00:13:00.867 Total : 8776.86 34.28 0.00 0.00 116083.13 23010.42 69516.71 00:13:00.867 0 00:13:00.867 01:06:14 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 4121273 00:13:00.867 01:06:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 4121273 ']' 00:13:00.867 01:06:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 4121273 00:13:00.867 01:06:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:13:00.867 01:06:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:00.867 01:06:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4121273 00:13:00.867 01:06:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:00.867 01:06:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:00.867 01:06:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4121273' 00:13:00.867 killing process with pid 4121273 00:13:00.867 01:06:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 4121273 00:13:00.867 Received shutdown signal, test time was about 10.000000 seconds 00:13:00.867 00:13:00.867 Latency(us) 00:13:00.867 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:00.867 
=================================================================================================================== 00:13:00.867 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:00.867 01:06:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 4121273 00:13:00.867 01:06:15 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:00.867 01:06:15 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:13:00.867 01:06:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:00.867 01:06:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:13:00.867 01:06:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:00.867 01:06:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:13:00.867 01:06:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:00.867 01:06:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:00.867 rmmod nvme_tcp 00:13:00.867 rmmod nvme_fabrics 00:13:00.867 rmmod nvme_keyring 00:13:00.867 01:06:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:00.867 01:06:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:13:00.867 01:06:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:13:00.867 01:06:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 4121247 ']' 00:13:00.867 01:06:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 4121247 00:13:00.867 01:06:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 4121247 ']' 00:13:00.867 01:06:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 4121247 00:13:00.867 01:06:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:13:00.867 01:06:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:00.867 01:06:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4121247 00:13:00.867 01:06:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:00.867 01:06:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:00.867 01:06:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4121247' 00:13:00.867 killing process with pid 4121247 00:13:00.867 01:06:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 4121247 00:13:00.867 01:06:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 4121247 00:13:00.867 01:06:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:00.867 01:06:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:00.867 01:06:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:00.867 01:06:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:00.867 01:06:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:00.867 01:06:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:00.867 01:06:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:00.867 01:06:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:01.801 01:06:17 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:01.801 00:13:01.801 real 0m16.025s 00:13:01.801 user 0m21.631s 00:13:01.801 sys 0m3.464s 00:13:01.801 01:06:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:01.801 01:06:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:01.801 ************************************ 00:13:01.801 END TEST nvmf_queue_depth 00:13:01.801 ************************************ 00:13:01.801 01:06:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:01.801 01:06:17 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:13:01.801 01:06:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:01.801 01:06:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:01.801 01:06:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:01.801 ************************************ 00:13:01.801 START TEST nvmf_target_multipath 00:13:01.801 ************************************ 00:13:01.801 01:06:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:13:01.801 * Looking for test storage... 00:13:01.801 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:01.801 01:06:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:01.801 01:06:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:13:01.801 01:06:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:01.801 01:06:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:01.801 01:06:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:01.801 01:06:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:01.801 01:06:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:01.801 01:06:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:01.801 01:06:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:01.801 01:06:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:01.801 01:06:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:01.801 01:06:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:01.801 01:06:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:01.801 01:06:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:01.801 01:06:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:01.801 01:06:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:01.801 01:06:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:01.801 01:06:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:01.801 01:06:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:01.802 01:06:17 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:01.802 01:06:17 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:01.802 01:06:17 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:01.802 01:06:17 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.802 01:06:17 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.802 01:06:17 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.802 01:06:17 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:13:01.802 01:06:17 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.802 01:06:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:13:01.802 01:06:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:01.802 01:06:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:01.802 01:06:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:01.802 01:06:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:01.802 01:06:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:13:01.802 01:06:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:01.802 01:06:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:01.802 01:06:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:01.802 01:06:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:01.802 01:06:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:01.802 01:06:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:13:01.802 01:06:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:01.802 01:06:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:13:01.802 01:06:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:01.802 01:06:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:01.802 01:06:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:01.802 01:06:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:01.802 01:06:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:01.802 01:06:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:01.802 01:06:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:01.802 01:06:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:01.802 01:06:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:01.802 01:06:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:01.802 01:06:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:13:01.802 01:06:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@298 -- # local -ga mlx 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:04.332 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:04.332 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:04.332 Found net devices under 0000:09:00.0: cvl_0_0 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:04.332 Found net devices under 0000:09:00.1: cvl_0_1 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:04.332 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:04.332 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:13:04.332 00:13:04.332 --- 10.0.0.2 ping statistics --- 00:13:04.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:04.332 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:04.332 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:04.332 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:13:04.332 00:13:04.332 --- 10.0.0.1 ping statistics --- 00:13:04.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:04.332 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:13:04.332 only one NIC for nvmf test 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:13:04.332 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:04.333 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:04.333 rmmod nvme_tcp 00:13:04.333 rmmod nvme_fabrics 00:13:04.333 rmmod nvme_keyring 00:13:04.333 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:04.333 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:13:04.333 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:13:04.333 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:13:04.333 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:04.333 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:04.333 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:04.333 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:04.333 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:04.333 01:06:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:04.333 01:06:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:04.333 01:06:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:06.236 01:06:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:13:06.236 01:06:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:13:06.236 01:06:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:13:06.236 01:06:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:06.236 01:06:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:13:06.236 01:06:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:06.236 01:06:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:13:06.236 01:06:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:06.236 01:06:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:06.236 01:06:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:06.236 01:06:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:13:06.236 01:06:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:13:06.236 01:06:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:13:06.236 01:06:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:06.236 01:06:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:06.236 01:06:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:06.236 01:06:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:06.236 01:06:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:06.236 01:06:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:06.236 01:06:21 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:06.236 01:06:21 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:06.236 01:06:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:06.236 00:13:06.236 real 0m4.419s 00:13:06.236 user 0m0.833s 00:13:06.236 sys 0m1.583s 00:13:06.236 01:06:21 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:06.236 01:06:21 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:13:06.236 ************************************ 00:13:06.236 END TEST nvmf_target_multipath 00:13:06.236 ************************************ 00:13:06.236 01:06:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:06.236 01:06:21 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:13:06.236 01:06:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:06.236 01:06:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:06.236 01:06:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:06.236 ************************************ 00:13:06.236 START TEST nvmf_zcopy 00:13:06.236 ************************************ 00:13:06.236 01:06:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:13:06.236 * Looking for test storage... 
00:13:06.236 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:06.236 01:06:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:06.236 01:06:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:13:06.236 01:06:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:06.236 01:06:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:06.236 01:06:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:06.236 01:06:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:06.236 01:06:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:06.236 01:06:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:06.236 01:06:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:06.236 01:06:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:06.236 01:06:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:06.236 01:06:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:06.236 01:06:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:06.236 01:06:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:06.236 01:06:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:06.236 01:06:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:06.236 01:06:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:06.237 01:06:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:06.237 01:06:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:06.237 01:06:22 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:06.237 01:06:22 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:06.237 01:06:22 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:06.237 01:06:22 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.237 01:06:22 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
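[annotation] The device scan that follows (identical to the multipath one above) maps each supported PCI function to its kernel interface through sysfs; that is all the "Found net devices under ..." lines amount to. Sketch of the lookup, with the bus addresses from this host:

    for pci in 0000:09:00.0 0000:09:00.1; do
        # Each net device registered by the ice driver for this function
        # appears as a directory name under the PCI device's net/ node.
        ls "/sys/bus/pci/devices/$pci/net/"    # -> cvl_0_0 / cvl_0_1 on this host
    done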
00:13:06.237 01:06:22 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.237 01:06:22 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:13:06.237 01:06:22 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.237 01:06:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:13:06.237 01:06:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:06.237 01:06:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:06.237 01:06:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:06.237 01:06:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:06.237 01:06:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:06.237 01:06:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:06.237 01:06:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:06.237 01:06:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:06.237 01:06:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:13:06.237 01:06:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:06.237 01:06:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:06.237 01:06:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:06.237 01:06:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:06.237 01:06:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:06.237 01:06:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:06.237 01:06:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:06.237 01:06:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:06.237 01:06:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:06.237 01:06:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:06.237 01:06:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:13:06.237 01:06:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:08.766 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:08.766 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:13:08.766 01:06:24 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:13:08.766 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:08.766 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:08.766 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:08.766 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:08.766 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:13:08.766 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:08.766 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:13:08.766 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:13:08.766 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:13:08.766 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:13:08.766 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:13:08.766 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:13:08.766 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:08.766 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:08.766 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:08.766 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:08.766 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:08.766 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:08.766 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:08.766 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:08.766 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:08.766 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:08.766 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:08.766 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:08.766 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:08.766 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:08.766 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:08.766 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:08.766 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:08.766 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:08.766 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:08.766 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:08.766 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:08.766 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:08.766 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:08.766 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:08.766 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:08.766 
01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:08.766 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:08.766 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:08.766 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:08.766 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:08.766 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:08.766 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:08.766 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:08.766 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:08.766 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:08.766 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:08.766 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:08.766 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:08.766 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:08.766 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:08.766 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:08.766 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:08.766 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:08.766 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:08.766 Found net devices under 0000:09:00.0: cvl_0_0 00:13:08.766 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:08.766 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:08.766 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:08.766 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:08.766 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:08.767 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:08.767 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:08.767 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:08.767 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:08.767 Found net devices under 0000:09:00.1: cvl_0_1 00:13:08.767 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:08.767 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:08.767 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:13:08.767 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:08.767 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:08.767 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:08.767 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:08.767 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:08.767 01:06:24 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:08.767 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:08.767 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:08.767 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:08.767 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:08.767 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:08.767 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:08.767 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:08.767 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:08.767 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:08.767 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:08.767 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:08.767 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:08.767 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:08.767 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:08.767 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:08.767 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:08.767 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:08.767 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:08.767 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:13:08.767 00:13:08.767 --- 10.0.0.2 ping statistics --- 00:13:08.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:08.767 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:13:08.767 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:08.767 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:08.767 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:13:08.767 00:13:08.767 --- 10.0.0.1 ping statistics --- 00:13:08.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:08.767 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:13:08.767 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:08.767 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:13:08.767 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:08.767 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:08.767 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:08.767 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:08.767 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:08.767 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:08.767 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:08.767 01:06:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:13:08.767 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:08.767 01:06:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:08.767 01:06:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:08.767 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=4126952 00:13:08.767 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:08.767 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 4126952 00:13:08.767 01:06:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 4126952 ']' 00:13:08.767 01:06:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:08.767 01:06:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:08.767 01:06:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:08.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:08.767 01:06:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:08.767 01:06:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:08.767 [2024-07-16 01:06:24.434312] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:13:08.767 [2024-07-16 01:06:24.434399] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:08.767 EAL: No free 2048 kB hugepages reported on node 1 00:13:08.767 [2024-07-16 01:06:24.498405] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:08.767 [2024-07-16 01:06:24.599142] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:08.767 [2024-07-16 01:06:24.599200] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:08.767 [2024-07-16 01:06:24.599222] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:08.767 [2024-07-16 01:06:24.599232] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:08.767 [2024-07-16 01:06:24.599242] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:08.767 [2024-07-16 01:06:24.599267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:08.767 01:06:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:08.767 01:06:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:13:08.767 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:08.767 01:06:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:08.767 01:06:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:08.767 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:08.767 01:06:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:13:08.767 01:06:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:13:08.767 01:06:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.767 01:06:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:08.767 [2024-07-16 01:06:24.744139] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:08.767 01:06:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.767 01:06:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:08.767 01:06:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.767 01:06:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:08.767 01:06:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.767 01:06:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:08.767 01:06:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.767 01:06:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:09.027 [2024-07-16 01:06:24.760362] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:09.027 01:06:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.027 01:06:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:09.027 01:06:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.027 01:06:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:09.027 01:06:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.027 01:06:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:13:09.027 01:06:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.027 01:06:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:09.027 malloc0 00:13:09.027 01:06:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.027 
01:06:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:09.027 01:06:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.027 01:06:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:09.027 01:06:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.027 01:06:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:13:09.027 01:06:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:13:09.027 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:13:09.027 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:13:09.027 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:09.027 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:09.027 { 00:13:09.027 "params": { 00:13:09.027 "name": "Nvme$subsystem", 00:13:09.027 "trtype": "$TEST_TRANSPORT", 00:13:09.027 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:09.027 "adrfam": "ipv4", 00:13:09.027 "trsvcid": "$NVMF_PORT", 00:13:09.027 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:09.027 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:09.027 "hdgst": ${hdgst:-false}, 00:13:09.027 "ddgst": ${ddgst:-false} 00:13:09.027 }, 00:13:09.027 "method": "bdev_nvme_attach_controller" 00:13:09.027 } 00:13:09.027 EOF 00:13:09.027 )") 00:13:09.027 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:13:09.027 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:13:09.027 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:13:09.027 01:06:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:09.027 "params": { 00:13:09.027 "name": "Nvme1", 00:13:09.027 "trtype": "tcp", 00:13:09.027 "traddr": "10.0.0.2", 00:13:09.027 "adrfam": "ipv4", 00:13:09.027 "trsvcid": "4420", 00:13:09.027 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:09.027 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:09.027 "hdgst": false, 00:13:09.027 "ddgst": false 00:13:09.027 }, 00:13:09.027 "method": "bdev_nvme_attach_controller" 00:13:09.027 }' 00:13:09.027 [2024-07-16 01:06:24.846013] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:13:09.027 [2024-07-16 01:06:24.846092] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4126975 ] 00:13:09.027 EAL: No free 2048 kB hugepages reported on node 1 00:13:09.027 [2024-07-16 01:06:24.909672] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:09.027 [2024-07-16 01:06:25.020404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.626 Running I/O for 10 seconds... 
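The resolved JSON printed above is exactly what bdevperf received on /dev/fd/62. A stand-alone equivalent using process substitution is sketched below; the outer "subsystems"/"config" wrapper is an assumption (gen_nvmf_target_json's wrapping is not visible in this trace), while the bdev_nvme_attach_controller entry and all flags (-t 10 s runtime, -q 128 queue depth, -w verify workload, -o 8192-byte I/Os) are verbatim from the log.

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
  -t 10 -q 128 -w verify -o 8192 --json <(cat <<'EOF'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "params": {
        "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false, "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }]
  }]
}
EOF
)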
00:13:19.588
00:13:19.588                                                                Latency(us)
00:13:19.588 Device Information                                           : runtime(s)     IOPS    MiB/s   Fail/s   TO/s    Average      min      max
00:13:19.588 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:13:19.588 Verification LBA range: start 0x0 length 0x1000
00:13:19.588 Nvme1n1                                                      :      10.02  5938.71    46.40     0.00    0.00   21494.88  3301.07  32234.00
00:13:19.588 ===================================================================================================================
00:13:19.588 Total                                                        :             5938.71    46.40     0.00    0.00   21494.88  3301.07  32234.00
00:13:19.846 01:06:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=4128287
00:13:19.846 01:06:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:13:19.846 01:06:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:13:19.846 01:06:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:13:19.846 01:06:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:13:19.846 01:06:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:13:19.846 01:06:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:13:19.846 01:06:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:13:19.846 01:06:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:13:19.846 {
00:13:19.846 "params": {
00:13:19.846 "name": "Nvme$subsystem",
00:13:19.846 "trtype": "$TEST_TRANSPORT",
00:13:19.846 "traddr": "$NVMF_FIRST_TARGET_IP",
00:13:19.846 "adrfam": "ipv4",
00:13:19.846 "trsvcid": "$NVMF_PORT",
00:13:19.846 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:13:19.846 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:13:19.846 "hdgst": ${hdgst:-false},
00:13:19.846 "ddgst": ${ddgst:-false}
00:13:19.846 },
00:13:19.846 "method": "bdev_nvme_attach_controller"
00:13:19.846 }
00:13:19.846 EOF
00:13:19.846 )")
00:13:19.846 01:06:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:13:19.846 [2024-07-16 01:06:35.627263] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:19.846 [2024-07-16 01:06:35.627324] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:19.846 01:06:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
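Quick cross-check of the Total row from the 10-second verify run, while the trace assembles the follow-up randrw job around it: throughput should equal IOPS times the 8192-byte I/O size, and with 128 requests outstanding the average latency should sit near queue depth divided by IOPS (Little's law). A minimal sketch:

awk 'BEGIN {
  iops = 5938.71; iosz = 8192; qd = 128
  printf "MiB/s : %.2f\n", iops * iosz / 1048576   # 46.40, matches the MiB/s column
  printf "avg us: %.0f\n", qd / iops * 1e6         # ~21554, close to the reported 21494.88 us average
}'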
00:13:19.846 01:06:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=,
00:13:19.846 01:06:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:13:19.846 "params": {
00:13:19.846 "name": "Nvme1",
00:13:19.846 "trtype": "tcp",
00:13:19.846 "traddr": "10.0.0.2",
00:13:19.846 "adrfam": "ipv4",
00:13:19.846 "trsvcid": "4420",
00:13:19.846 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:13:19.846 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:13:19.846 "hdgst": false,
00:13:19.846 "ddgst": false
00:13:19.846 },
00:13:19.846 "method": "bdev_nvme_attach_controller"
00:13:19.846 }'
[2024-07-16 01:06:35.635194] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
[2024-07-16 01:06:35.635219] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line *ERROR* pair repeats at 01:06:35.643, .651 and .659 ...]
[2024-07-16 01:06:35.661828] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization...
[2024-07-16 01:06:35.661886] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4128287 ]
[... *ERROR* pair repeats at 01:06:35.667, .675 and .683 ...]
EAL: No free 2048 kB hugepages reported on node 1
[... *ERROR* pair repeats, 01:06:35.691 through 01:06:35.723 ...]
[2024-07-16 01:06:35.723792] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
[... *ERROR* pair repeats, 01:06:35.731 through 01:06:35.835 ...]
[2024-07-16 01:06:35.838965] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
[... *ERROR* pair repeats, 01:06:35.843 through 01:06:36.132 ...]
Running I/O for 5 seconds...
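The ERROR pair that floods the rest of this run is expected noise, not a failure: namespace 1 already exists, so every further nvmf_subsystem_add_ns is rejected, but each attempt still pauses and resumes the subsystem while zero-copy I/O is outstanding, which appears to be exactly the path under test. A plausible reconstruction of the driving loop (a sketch, not quoted from the harness; perfpid=4128287 is the randrw bdevperf started above, and rpc_cmd is the wrapper visible in the trace):

while kill -0 "$perfpid" 2>/dev/null; do   # keep going for as long as bdevperf (pid 4128287) is alive
  # Expected to fail with "Requested NSID 1 already in use"; the side effect
  # (subsystem pause/resume) is what exercises zcopy request handling.
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done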
00:13:20.366 [2024-07-16 01:06:36.140627] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:20.367 [2024-07-16 01:06:36.140648] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line *ERROR* pair repeats every ~10 ms for the rest of the 5-second randrw run, 01:06:36.155 through 01:06:38.039 (wall clock 00:13:20.367-00:13:22.193); duplicates condensed ...]
00:13:22.193 [2024-07-16 01:06:38.050270] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:22.193 [2024-07-16 01:06:38.050297] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:22.193 [2024-07-16 01:06:38.061016]
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.193 [2024-07-16 01:06:38.061045] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.193 [2024-07-16 01:06:38.073722] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.193 [2024-07-16 01:06:38.073750] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.193 [2024-07-16 01:06:38.083579] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.193 [2024-07-16 01:06:38.083606] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.193 [2024-07-16 01:06:38.094102] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.193 [2024-07-16 01:06:38.094130] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.193 [2024-07-16 01:06:38.104872] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.193 [2024-07-16 01:06:38.104900] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.193 [2024-07-16 01:06:38.117439] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.193 [2024-07-16 01:06:38.117466] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.193 [2024-07-16 01:06:38.127394] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.193 [2024-07-16 01:06:38.127421] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.193 [2024-07-16 01:06:38.138246] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.193 [2024-07-16 01:06:38.138289] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.193 [2024-07-16 01:06:38.149215] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.194 [2024-07-16 01:06:38.149244] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.194 [2024-07-16 01:06:38.159589] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.194 [2024-07-16 01:06:38.159616] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.194 [2024-07-16 01:06:38.170059] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.194 [2024-07-16 01:06:38.170087] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.194 [2024-07-16 01:06:38.181128] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.194 [2024-07-16 01:06:38.181157] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.452 [2024-07-16 01:06:38.191931] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.452 [2024-07-16 01:06:38.191983] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.452 [2024-07-16 01:06:38.202699] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.452 [2024-07-16 01:06:38.202727] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.452 [2024-07-16 01:06:38.215926] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.452 [2024-07-16 01:06:38.215967] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.452 [2024-07-16 01:06:38.226226] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.452 [2024-07-16 01:06:38.226255] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.452 [2024-07-16 01:06:38.236528] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.452 [2024-07-16 01:06:38.236557] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.452 [2024-07-16 01:06:38.247393] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.452 [2024-07-16 01:06:38.247421] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.452 [2024-07-16 01:06:38.260467] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.452 [2024-07-16 01:06:38.260494] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.452 [2024-07-16 01:06:38.270754] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.452 [2024-07-16 01:06:38.270781] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.452 [2024-07-16 01:06:38.281222] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.452 [2024-07-16 01:06:38.281250] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.452 [2024-07-16 01:06:38.291524] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.452 [2024-07-16 01:06:38.291552] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.453 [2024-07-16 01:06:38.302124] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.453 [2024-07-16 01:06:38.302152] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.453 [2024-07-16 01:06:38.312913] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.453 [2024-07-16 01:06:38.312964] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.453 [2024-07-16 01:06:38.323295] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.453 [2024-07-16 01:06:38.323323] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.453 [2024-07-16 01:06:38.333782] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.453 [2024-07-16 01:06:38.333810] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.453 [2024-07-16 01:06:38.344394] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.453 [2024-07-16 01:06:38.344421] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.453 [2024-07-16 01:06:38.356534] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.453 [2024-07-16 01:06:38.356561] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.453 [2024-07-16 01:06:38.365736] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.453 [2024-07-16 01:06:38.365763] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.453 [2024-07-16 01:06:38.377176] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.453 [2024-07-16 01:06:38.377204] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.453 [2024-07-16 01:06:38.387623] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.453 [2024-07-16 01:06:38.387651] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.453 [2024-07-16 01:06:38.398727] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.453 [2024-07-16 01:06:38.398754] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.453 [2024-07-16 01:06:38.411286] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.453 [2024-07-16 01:06:38.411314] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.453 [2024-07-16 01:06:38.420842] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.453 [2024-07-16 01:06:38.420871] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.453 [2024-07-16 01:06:38.432174] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.453 [2024-07-16 01:06:38.432202] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.453 [2024-07-16 01:06:38.442654] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.453 [2024-07-16 01:06:38.442682] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.711 [2024-07-16 01:06:38.453520] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.711 [2024-07-16 01:06:38.453548] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.711 [2024-07-16 01:06:38.464170] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.711 [2024-07-16 01:06:38.464198] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.711 [2024-07-16 01:06:38.474646] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.711 [2024-07-16 01:06:38.474673] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.711 [2024-07-16 01:06:38.485035] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.711 [2024-07-16 01:06:38.485063] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.711 [2024-07-16 01:06:38.495348] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.711 [2024-07-16 01:06:38.495375] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.711 [2024-07-16 01:06:38.505898] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.711 [2024-07-16 01:06:38.505925] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.711 [2024-07-16 01:06:38.518090] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.711 [2024-07-16 01:06:38.518119] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.711 [2024-07-16 01:06:38.527429] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.711 [2024-07-16 01:06:38.527457] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.711 [2024-07-16 01:06:38.538842] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.711 [2024-07-16 01:06:38.538871] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.711 [2024-07-16 01:06:38.551566] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.711 [2024-07-16 01:06:38.551593] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.711 [2024-07-16 01:06:38.561376] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.711 [2024-07-16 01:06:38.561403] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.711 [2024-07-16 01:06:38.572275] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.711 [2024-07-16 01:06:38.572302] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.711 [2024-07-16 01:06:38.583082] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.711 [2024-07-16 01:06:38.583110] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.711 [2024-07-16 01:06:38.593872] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.711 [2024-07-16 01:06:38.593900] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.711 [2024-07-16 01:06:38.606394] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.711 [2024-07-16 01:06:38.606422] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.711 [2024-07-16 01:06:38.616809] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.711 [2024-07-16 01:06:38.616836] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.711 [2024-07-16 01:06:38.627384] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.711 [2024-07-16 01:06:38.627411] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.712 [2024-07-16 01:06:38.637692] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.712 [2024-07-16 01:06:38.637720] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.712 [2024-07-16 01:06:38.647789] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.712 [2024-07-16 01:06:38.647825] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.712 [2024-07-16 01:06:38.658085] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.712 [2024-07-16 01:06:38.658114] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.712 [2024-07-16 01:06:38.668591] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.712 [2024-07-16 01:06:38.668619] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.712 [2024-07-16 01:06:38.678823] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.712 [2024-07-16 01:06:38.678851] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.712 [2024-07-16 01:06:38.689569] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.712 [2024-07-16 01:06:38.689597] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.712 [2024-07-16 01:06:38.700447] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.712 [2024-07-16 01:06:38.700475] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.970 [2024-07-16 01:06:38.711079] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.970 [2024-07-16 01:06:38.711108] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.970 [2024-07-16 01:06:38.721727] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.970 [2024-07-16 01:06:38.721755] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.970 [2024-07-16 01:06:38.732213] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.970 [2024-07-16 01:06:38.732256] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.970 [2024-07-16 01:06:38.742864] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.970 [2024-07-16 01:06:38.742893] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.970 [2024-07-16 01:06:38.753421] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.970 [2024-07-16 01:06:38.753449] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.970 [2024-07-16 01:06:38.763803] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.970 [2024-07-16 01:06:38.763830] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.970 [2024-07-16 01:06:38.774435] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.970 [2024-07-16 01:06:38.774463] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.970 [2024-07-16 01:06:38.785279] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.970 [2024-07-16 01:06:38.785307] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.970 [2024-07-16 01:06:38.797341] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.970 [2024-07-16 01:06:38.797368] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.970 [2024-07-16 01:06:38.806893] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.970 [2024-07-16 01:06:38.806921] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.970 [2024-07-16 01:06:38.818094] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.970 [2024-07-16 01:06:38.818123] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.970 [2024-07-16 01:06:38.830357] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.970 [2024-07-16 01:06:38.830385] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.970 [2024-07-16 01:06:38.839606] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.970 [2024-07-16 01:06:38.839634] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.970 [2024-07-16 01:06:38.851111] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.970 [2024-07-16 01:06:38.851147] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.970 [2024-07-16 01:06:38.861704] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.970 [2024-07-16 01:06:38.861731] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.970 [2024-07-16 01:06:38.872574] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.970 [2024-07-16 01:06:38.872601] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.970 [2024-07-16 01:06:38.883093] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.970 [2024-07-16 01:06:38.883121] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.970 [2024-07-16 01:06:38.893856] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.970 [2024-07-16 01:06:38.893883] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.970 [2024-07-16 01:06:38.906681] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.970 [2024-07-16 01:06:38.906708] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.970 [2024-07-16 01:06:38.916898] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.970 [2024-07-16 01:06:38.916926] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.970 [2024-07-16 01:06:38.927914] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.970 [2024-07-16 01:06:38.927941] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.970 [2024-07-16 01:06:38.940386] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.970 [2024-07-16 01:06:38.940413] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.970 [2024-07-16 01:06:38.950896] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.970 [2024-07-16 01:06:38.950923] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.970 [2024-07-16 01:06:38.961799] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.970 [2024-07-16 01:06:38.961826] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.229 [2024-07-16 01:06:38.974473] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.229 [2024-07-16 01:06:38.974501] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.229 [2024-07-16 01:06:38.986102] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.229 [2024-07-16 01:06:38.986131] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.229 [2024-07-16 01:06:38.995700] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.229 [2024-07-16 01:06:38.995727] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.229 [2024-07-16 01:06:39.006994] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.229 [2024-07-16 01:06:39.007022] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.229 [2024-07-16 01:06:39.017671] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.229 [2024-07-16 01:06:39.017698] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.229 [2024-07-16 01:06:39.028336] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.229 [2024-07-16 01:06:39.028364] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.229 [2024-07-16 01:06:39.041152] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.229 [2024-07-16 01:06:39.041181] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.229 [2024-07-16 01:06:39.050717] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.229 [2024-07-16 01:06:39.050744] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.229 [2024-07-16 01:06:39.061254] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.229 [2024-07-16 01:06:39.061303] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.229 [2024-07-16 01:06:39.073398] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.229 [2024-07-16 01:06:39.073425] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.229 [2024-07-16 01:06:39.082390] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.229 [2024-07-16 01:06:39.082418] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.229 [2024-07-16 01:06:39.093632] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.229 [2024-07-16 01:06:39.093660] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.229 [2024-07-16 01:06:39.104307] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.229 [2024-07-16 01:06:39.104335] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.229 [2024-07-16 01:06:39.114777] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.229 [2024-07-16 01:06:39.114805] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.229 [2024-07-16 01:06:39.125493] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.229 [2024-07-16 01:06:39.125521] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.229 [2024-07-16 01:06:39.136121] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.229 [2024-07-16 01:06:39.136149] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.229 [2024-07-16 01:06:39.146854] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.229 [2024-07-16 01:06:39.146881] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.229 [2024-07-16 01:06:39.159233] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.229 [2024-07-16 01:06:39.159277] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.229 [2024-07-16 01:06:39.169602] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.229 [2024-07-16 01:06:39.169630] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.229 [2024-07-16 01:06:39.180317] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.229 [2024-07-16 01:06:39.180344] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.229 [2024-07-16 01:06:39.190947] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.229 [2024-07-16 01:06:39.190989] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.229 [2024-07-16 01:06:39.201646] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.229 [2024-07-16 01:06:39.201674] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.229 [2024-07-16 01:06:39.212485] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.229 [2024-07-16 01:06:39.212512] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.487 [2024-07-16 01:06:39.223159] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.487 [2024-07-16 01:06:39.223187] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.487 [2024-07-16 01:06:39.234085] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.487 [2024-07-16 01:06:39.234113] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.487 [2024-07-16 01:06:39.245398] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.487 [2024-07-16 01:06:39.245426] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.487 [2024-07-16 01:06:39.255879] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.487 [2024-07-16 01:06:39.255907] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.487 [2024-07-16 01:06:39.266259] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.487 [2024-07-16 01:06:39.266314] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.487 [2024-07-16 01:06:39.276707] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.487 [2024-07-16 01:06:39.276734] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.487 [2024-07-16 01:06:39.287416] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.487 [2024-07-16 01:06:39.287445] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.487 [2024-07-16 01:06:39.297901] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.487 [2024-07-16 01:06:39.297928] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.487 [2024-07-16 01:06:39.308527] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.487 [2024-07-16 01:06:39.308554] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.487 [2024-07-16 01:06:39.319657] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.487 [2024-07-16 01:06:39.319685] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.487 [2024-07-16 01:06:39.332352] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.487 [2024-07-16 01:06:39.332380] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.487 [2024-07-16 01:06:39.343929] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.487 [2024-07-16 01:06:39.343980] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.487 [2024-07-16 01:06:39.353475] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.487 [2024-07-16 01:06:39.353502] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.487 [2024-07-16 01:06:39.365131] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.487 [2024-07-16 01:06:39.365159] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.487 [2024-07-16 01:06:39.375704] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.487 [2024-07-16 01:06:39.375732] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.488 [2024-07-16 01:06:39.386090] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.488 [2024-07-16 01:06:39.386118] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.488 [2024-07-16 01:06:39.396775] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.488 [2024-07-16 01:06:39.396803] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.488 [2024-07-16 01:06:39.407712] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.488 [2024-07-16 01:06:39.407741] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.488 [2024-07-16 01:06:39.418258] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.488 [2024-07-16 01:06:39.418286] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.488 [2024-07-16 01:06:39.429220] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.488 [2024-07-16 01:06:39.429263] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.488 [2024-07-16 01:06:39.439986] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.488 [2024-07-16 01:06:39.440014] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.488 [2024-07-16 01:06:39.452492] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.488 [2024-07-16 01:06:39.452520] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.488 [2024-07-16 01:06:39.462511] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.488 [2024-07-16 01:06:39.462539] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.488 [2024-07-16 01:06:39.472951] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.488 [2024-07-16 01:06:39.472999] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.746 [2024-07-16 01:06:39.483754] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.746 [2024-07-16 01:06:39.483783] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.746 [2024-07-16 01:06:39.494627] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.746 [2024-07-16 01:06:39.494655] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.746 [2024-07-16 01:06:39.505167] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.746 [2024-07-16 01:06:39.505196] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.746 [2024-07-16 01:06:39.515843] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.746 [2024-07-16 01:06:39.515871] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.746 [2024-07-16 01:06:39.526530] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.746 [2024-07-16 01:06:39.526558] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.746 [2024-07-16 01:06:39.540129] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.746 [2024-07-16 01:06:39.540158] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.746 [2024-07-16 01:06:39.550212] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.746 [2024-07-16 01:06:39.550255] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.746 [2024-07-16 01:06:39.560625] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.746 [2024-07-16 01:06:39.560652] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.746 [2024-07-16 01:06:39.571435] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.746 [2024-07-16 01:06:39.571463] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.746 [2024-07-16 01:06:39.583741] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.746 [2024-07-16 01:06:39.583768] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.746 [2024-07-16 01:06:39.593369] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.746 [2024-07-16 01:06:39.593397] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.746 [2024-07-16 01:06:39.606461] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.746 [2024-07-16 01:06:39.606489] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.746 [2024-07-16 01:06:39.617357] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.746 [2024-07-16 01:06:39.617385] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.746 [2024-07-16 01:06:39.627859] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.746 [2024-07-16 01:06:39.627886] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.746 [2024-07-16 01:06:39.638314] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.746 [2024-07-16 01:06:39.638342] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.746 [2024-07-16 01:06:39.649296] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.746 [2024-07-16 01:06:39.649324] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.746 [2024-07-16 01:06:39.661581] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.746 [2024-07-16 01:06:39.661609] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.746 [2024-07-16 01:06:39.670711] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.746 [2024-07-16 01:06:39.670739] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.746 [2024-07-16 01:06:39.682153] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.746 [2024-07-16 01:06:39.682181] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.746 [2024-07-16 01:06:39.694558] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.746 [2024-07-16 01:06:39.694585] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.746 [2024-07-16 01:06:39.703480] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.746 [2024-07-16 01:06:39.703507] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.746 [2024-07-16 01:06:39.716664] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.746 [2024-07-16 01:06:39.716692] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.746 [2024-07-16 01:06:39.726659] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.746 [2024-07-16 01:06:39.726685] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.746 [2024-07-16 01:06:39.737465] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.746 [2024-07-16 01:06:39.737508] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.004 [2024-07-16 01:06:39.750331] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.004 [2024-07-16 01:06:39.750358] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.004 [2024-07-16 01:06:39.760645] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.004 [2024-07-16 01:06:39.760673] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.004 [2024-07-16 01:06:39.771665] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.005 [2024-07-16 01:06:39.771692] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.005 [2024-07-16 01:06:39.782248] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.005 [2024-07-16 01:06:39.782291] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.005 [2024-07-16 01:06:39.792631] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.005 [2024-07-16 01:06:39.792658] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.005 [2024-07-16 01:06:39.803120] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.005 [2024-07-16 01:06:39.803149] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.005 [2024-07-16 01:06:39.813847] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.005 [2024-07-16 01:06:39.813876] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.005 [2024-07-16 01:06:39.826753] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.005 [2024-07-16 01:06:39.826780] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.005 [2024-07-16 01:06:39.836782] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.005 [2024-07-16 01:06:39.836809] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.005 [2024-07-16 01:06:39.847655] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.005 [2024-07-16 01:06:39.847683] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.005 [2024-07-16 01:06:39.858539] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.005 [2024-07-16 01:06:39.858566] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.005 [2024-07-16 01:06:39.869311] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.005 [2024-07-16 01:06:39.869339] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.005 [2024-07-16 01:06:39.882590] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.005 [2024-07-16 01:06:39.882618] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.005 [2024-07-16 01:06:39.893073] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.005 [2024-07-16 01:06:39.893102] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.005 [2024-07-16 01:06:39.904237] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.005 [2024-07-16 01:06:39.904283] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.005 [2024-07-16 01:06:39.917046] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.005 [2024-07-16 01:06:39.917074] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.005 [2024-07-16 01:06:39.927493] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.005 [2024-07-16 01:06:39.927521] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.005 [2024-07-16 01:06:39.938020] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.005 [2024-07-16 01:06:39.938048] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.005 [2024-07-16 01:06:39.948541] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.005 [2024-07-16 01:06:39.948568] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.005 [2024-07-16 01:06:39.961502] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.005 [2024-07-16 01:06:39.961530] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.005 [2024-07-16 01:06:39.972005] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.005 [2024-07-16 01:06:39.972034] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.005 [2024-07-16 01:06:39.982326] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.005 [2024-07-16 01:06:39.982354] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.005 [2024-07-16 01:06:39.992897] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.005 [2024-07-16 01:06:39.992925] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.262 [2024-07-16 01:06:40.005341] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.262 [2024-07-16 01:06:40.005371] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.262 [2024-07-16 01:06:40.016915] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.262 [2024-07-16 01:06:40.016962] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.262 [2024-07-16 01:06:40.028192] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.262 [2024-07-16 01:06:40.028227] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.262 [2024-07-16 01:06:40.038862] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.263 [2024-07-16 01:06:40.038893] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.263 [2024-07-16 01:06:40.049343] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.263 [2024-07-16 01:06:40.049371] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.263 [2024-07-16 01:06:40.059977] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.263 [2024-07-16 01:06:40.060006] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.263 [2024-07-16 01:06:40.072583] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.263 [2024-07-16 01:06:40.072611] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.263 [2024-07-16 01:06:40.082863] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.263 [2024-07-16 01:06:40.082891] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.263 [2024-07-16 01:06:40.093287] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.263 [2024-07-16 01:06:40.093315] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.263 [2024-07-16 01:06:40.104040] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.263 [2024-07-16 01:06:40.104068] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.263 [2024-07-16 01:06:40.116682] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.263 [2024-07-16 01:06:40.116709] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.263 [2024-07-16 01:06:40.127009] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.263 [2024-07-16 01:06:40.127038] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.263
[... the same pair of errors repeats at roughly 10 ms intervals, with timestamps running from 01:06:40.127 through 01:06:41.159 and elapsed stamps advancing from 00:13:24.263 to 00:13:25.296, as the test keeps requesting NSID 1 while it is already in use; the repeats are condensed here ...]
00:13:25.296
00:13:25.296 Latency(us)
00:13:25.296 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:25.296 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:13:25.296 Nvme1n1 : 5.01 11884.49 92.85 0.00 0.00 10754.80 5097.24 24660.95
00:13:25.296 ===================================================================================================================
00:13:25.296 Total : 11884.49 92.85 0.00 0.00 10754.80 5097.24 24660.95
[... the add-namespace error pair continues with timestamps from 01:06:41.166 through 01:06:41.415 and elapsed stamps from 00:13:25.296 to 00:13:25.555; condensed here ...]
[2024-07-16 01:06:41.423388] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.555 [2024-07-16 01:06:41.423408] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.555
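The condensed error run above comes from the zcopy test repeatedly re-adding NSID 1 while it is already attached, so every nvmf_subsystem_add_ns call fails with the same pair of messages until the background loop is reaped (the kill/wait just below). As a minimal sketch, the same error pair can be reproduced by hand against a running nvmf_tgt with two back-to-back adds; the scripts/rpc.py spelling is an assumption, but the method names and flags mirror the rpc_cmd invocations visible elsewhere in this log:

  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py bdev_malloc_create 64 512 -b malloc0                # 64 MiB malloc bdev, 512-byte blocks
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # first add: NSID 1 attached
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # second add fails as above:
  #   subsystem.c: "Requested NSID 1 already in use" -> nvmf_rpc.c: "Unable to add namespace"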
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (4128287) - No such process 00:13:25.555 01:06:41 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 4128287 00:13:25.555 01:06:41 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:25.556 01:06:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.556 01:06:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:25.556 01:06:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.556 01:06:41 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:25.556 01:06:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.556 01:06:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:25.556 delay0 00:13:25.556 01:06:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.556 01:06:41 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:13:25.556 01:06:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.556 01:06:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:25.556 01:06:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.556 01:06:41 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:13:25.556 EAL: No free 2048 kB hugepages reported on node 1 00:13:25.556 [2024-07-16 01:06:41.501452] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:32.105 Initializing NVMe Controllers 00:13:32.105 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:32.105 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:32.105 Initialization complete. Launching workers. 
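With the add-namespace loop finished, the script above removes NSID 1, creates the delay bdev delay0 on top of malloc0, re-adds it as NSID 1, and drives the abort example at the target for five seconds (-t 5), so that queued I/O is slow enough for abort commands to catch it in flight; the run's completion counts follow below. A standalone sketch of the same sequence, assuming scripts/rpc.py as the equivalent of the rpc_cmd helper (all flag values copied from the invocations above; bdev_delay latencies are given in microseconds):

  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000      # avg/p99 read and write latency
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'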
00:13:32.105 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 67 00:13:32.105 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 354, failed to submit 33 00:13:32.105 success 157, unsuccess 197, failed 0 00:13:32.105 01:06:47 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:13:32.105 01:06:47 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:13:32.105 01:06:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:32.105 01:06:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:13:32.105 01:06:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:32.105 01:06:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:13:32.105 01:06:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:32.105 01:06:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:32.105 rmmod nvme_tcp 00:13:32.105 rmmod nvme_fabrics 00:13:32.105 rmmod nvme_keyring 00:13:32.105 01:06:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:32.105 01:06:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:13:32.105 01:06:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:13:32.105 01:06:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 4126952 ']' 00:13:32.105 01:06:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 4126952 00:13:32.105 01:06:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 4126952 ']' 00:13:32.105 01:06:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 4126952 00:13:32.106 01:06:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:13:32.106 01:06:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:32.106 01:06:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4126952 00:13:32.106 01:06:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:32.106 01:06:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:32.106 01:06:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4126952' 00:13:32.106 killing process with pid 4126952 00:13:32.106 01:06:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 4126952 00:13:32.106 01:06:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 4126952 00:13:32.106 01:06:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:32.106 01:06:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:32.106 01:06:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:32.106 01:06:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:32.106 01:06:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:32.106 01:06:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:32.106 01:06:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:32.106 01:06:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:34.004 01:06:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:34.004 00:13:34.004 real 0m27.931s 00:13:34.004 user 0m39.742s 00:13:34.004 sys 0m8.938s 00:13:34.004 01:06:49 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:13:34.004 01:06:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:34.004 ************************************ 00:13:34.004 END TEST nvmf_zcopy 00:13:34.004 ************************************ 00:13:34.004 01:06:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:34.004 01:06:49 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:34.004 01:06:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:34.004 01:06:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:34.004 01:06:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:34.262 ************************************ 00:13:34.262 START TEST nvmf_nmic 00:13:34.262 ************************************ 00:13:34.262 01:06:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:34.262 * Looking for test storage... 00:13:34.262 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:34.262 01:06:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:34.262 01:06:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:13:34.262 01:06:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:34.262 01:06:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:34.262 01:06:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:34.262 01:06:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:34.262 01:06:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:34.262 01:06:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:34.262 01:06:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:34.262 01:06:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:34.262 01:06:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:34.262 01:06:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:34.262 01:06:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:34.262 01:06:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:34.262 01:06:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:34.262 01:06:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:34.262 01:06:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:34.262 01:06:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:34.262 01:06:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:34.262 01:06:50 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:34.262 01:06:50 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:34.262 01:06:50 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:34.262 01:06:50 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... earlier copies of the same three toolchain directories, condensed ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.262 01:06:50 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=[... same value with /opt/go/1.21.1/bin prepended; condensed ...] 00:13:34.262 01:06:50 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=[... same value with /opt/protoc/21.7/bin prepended; condensed ...] 00:13:34.262 01:06:50 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:13:34.262 01:06:50 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo [... exported PATH value, condensed ...] 00:13:34.262 01:06:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:13:34.262 01:06:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:34.262 01:06:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:34.262 01:06:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:34.262 01:06:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:34.262 01:06:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:34.262 01:06:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:34.262 01:06:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:34.262 01:06:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:34.262 01:06:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:34.262 01:06:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:34.262 01:06:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:13:34.262 01:06:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:34.262 01:06:50 nvmf_tcp.nvmf_nmic --
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:34.262 01:06:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:34.262 01:06:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:34.262 01:06:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:34.262 01:06:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:34.262 01:06:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:34.262 01:06:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:34.262 01:06:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:34.262 01:06:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:34.262 01:06:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:13:34.262 01:06:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:36.207 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:36.207 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:13:36.207 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:36.207 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:36.207 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:36.207 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:36.207 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:36.207 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:13:36.207 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:36.207 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:13:36.207 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:36.208 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:36.208 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:36.208 Found net devices under 0000:09:00.0: cvl_0_0 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:36.208 Found net devices under 0000:09:00.1: cvl_0_1 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:36.208 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:36.481 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:36.481 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:36.481 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:36.481 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:36.481 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:36.481 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:36.481 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:36.481 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:36.481 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:13:36.481 00:13:36.481 --- 10.0.0.2 ping statistics --- 00:13:36.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:36.481 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:13:36.481 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:36.481 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:36.482 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:13:36.482 00:13:36.482 --- 10.0.0.1 ping statistics --- 00:13:36.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:36.482 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:13:36.482 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:36.482 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:13:36.482 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:36.482 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:36.482 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:36.482 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:36.482 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:36.482 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:36.482 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:36.482 01:06:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:13:36.482 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:36.482 01:06:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:36.482 01:06:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:36.482 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=4131551 00:13:36.482 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:36.482 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 4131551 00:13:36.482 01:06:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 4131551 ']' 00:13:36.482 01:06:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:36.482 01:06:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:36.482 01:06:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:36.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:36.482 01:06:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:36.482 01:06:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:36.482 [2024-07-16 01:06:52.342237] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:13:36.482 [2024-07-16 01:06:52.342338] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:36.482 EAL: No free 2048 kB hugepages reported on node 1 00:13:36.482 [2024-07-16 01:06:52.410152] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:36.740 [2024-07-16 01:06:52.523404] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:36.740 [2024-07-16 01:06:52.523455] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:36.740 [2024-07-16 01:06:52.523478] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:36.740 [2024-07-16 01:06:52.523488] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:36.740 [2024-07-16 01:06:52.523498] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:36.740 [2024-07-16 01:06:52.523592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:36.740 [2024-07-16 01:06:52.523654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:36.740 [2024-07-16 01:06:52.523727] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:36.740 [2024-07-16 01:06:52.523730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:36.740 01:06:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:36.740 01:06:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:13:36.740 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:36.740 01:06:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:36.740 01:06:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:36.740 01:06:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:36.740 01:06:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:36.740 01:06:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.740 01:06:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:36.740 [2024-07-16 01:06:52.678802] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:36.740 01:06:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.740 01:06:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:36.740 01:06:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.740 01:06:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:36.740 Malloc0 00:13:36.740 01:06:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.740 01:06:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:36.740 01:06:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.740 01:06:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:36.740 01:06:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.740 01:06:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:36.740 01:06:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.740 01:06:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:36.740 01:06:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.740 01:06:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:36.740 01:06:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.740 01:06:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:36.740 [2024-07-16 01:06:52.732652] tcp.c: 981:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:36.997 01:06:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.997 01:06:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:13:36.997 test case1: single bdev can't be used in multiple subsystems 00:13:36.997 01:06:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:13:36.997 01:06:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.997 01:06:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:36.997 01:06:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.997 01:06:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:36.997 01:06:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.997 01:06:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:36.997 01:06:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.997 01:06:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:13:36.997 01:06:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:13:36.997 01:06:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.997 01:06:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:36.997 [2024-07-16 01:06:52.756475] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:13:36.997 [2024-07-16 01:06:52.756503] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:13:36.997 [2024-07-16 01:06:52.756527] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.997 request: 00:13:36.997 { 00:13:36.997 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:36.997 "namespace": { 00:13:36.997 "bdev_name": "Malloc0", 00:13:36.997 "no_auto_visible": false 00:13:36.997 }, 00:13:36.997 "method": "nvmf_subsystem_add_ns", 00:13:36.997 "req_id": 1 00:13:36.997 } 00:13:36.997 Got JSON-RPC error response 00:13:36.997 response: 00:13:36.997 { 00:13:36.997 "code": -32602, 00:13:36.997 "message": "Invalid parameters" 00:13:36.997 } 00:13:36.997 01:06:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:13:36.997 01:06:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:13:36.997 01:06:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:13:36.997 01:06:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:13:36.997 Adding namespace failed - expected result. 
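Test case1 above exercises bdev claiming: once Malloc0 backs a namespace in cnode1 it is claimed exclusive_write by the NVMe-oF target module, so attaching the same bdev to a second subsystem fails at bdev_open and the RPC surfaces -32602, exactly as the JSON shown. A minimal sketch of the failing call, again assuming scripts/rpc.py as the equivalent of the rpc_cmd helper used by the test:

  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0
  #   bdev.c:  bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target
  #   reply:   {"code": -32602, "message": "Invalid parameters"}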
00:13:36.997 01:06:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:13:36.997 test case2: host connect to nvmf target in multiple paths 00:13:36.997 01:06:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:13:36.997 01:06:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.997 01:06:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:36.997 [2024-07-16 01:06:52.764588] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:13:36.997 01:06:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.997 01:06:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:37.562 01:06:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:13:38.126 01:06:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:13:38.126 01:06:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:13:38.126 01:06:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:38.126 01:06:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:38.126 01:06:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:13:40.654 01:06:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:40.654 01:06:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:40.654 01:06:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:40.654 01:06:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:40.654 01:06:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:40.654 01:06:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:13:40.654 01:06:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:40.654 [global] 00:13:40.654 thread=1 00:13:40.654 invalidate=1 00:13:40.654 rw=write 00:13:40.654 time_based=1 00:13:40.654 runtime=1 00:13:40.654 ioengine=libaio 00:13:40.654 direct=1 00:13:40.654 bs=4096 00:13:40.654 iodepth=1 00:13:40.654 norandommap=0 00:13:40.654 numjobs=1 00:13:40.654 00:13:40.654 verify_dump=1 00:13:40.654 verify_backlog=512 00:13:40.654 verify_state_save=0 00:13:40.654 do_verify=1 00:13:40.654 verify=crc32c-intel 00:13:40.654 [job0] 00:13:40.654 filename=/dev/nvme0n1 00:13:40.654 Could not set queue depth (nvme0n1) 00:13:40.654 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:40.654 fio-3.35 00:13:40.654 Starting 1 thread 00:13:41.586 00:13:41.586 job0: (groupid=0, jobs=1): err= 0: pid=4132182: Tue Jul 16 01:06:57 2024 00:13:41.586 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:13:41.586 slat (nsec): min=4715, max=78135, avg=10104.51, stdev=7586.12 
00:13:41.587 clat (usec): min=188, max=538, avg=232.10, stdev=36.06 00:13:41.587 lat (usec): min=193, max=564, avg=242.21, stdev=40.60 00:13:41.587 clat percentiles (usec): 00:13:41.587 | 1.00th=[ 198], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 212], 00:13:41.587 | 30.00th=[ 217], 40.00th=[ 221], 50.00th=[ 225], 60.00th=[ 229], 00:13:41.587 | 70.00th=[ 233], 80.00th=[ 241], 90.00th=[ 262], 95.00th=[ 289], 00:13:41.587 | 99.00th=[ 441], 99.50th=[ 465], 99.90th=[ 502], 99.95th=[ 529], 00:13:41.587 | 99.99th=[ 537] 00:13:41.587 write: IOPS=2457, BW=9830KiB/s (10.1MB/s)(9840KiB/1001msec); 0 zone resets 00:13:41.587 slat (usec): min=6, max=29467, avg=25.03, stdev=593.90 00:13:41.587 clat (usec): min=128, max=384, avg=174.11, stdev=36.43 00:13:41.587 lat (usec): min=139, max=29702, avg=199.14, stdev=596.28 00:13:41.587 clat percentiles (usec): 00:13:41.587 | 1.00th=[ 139], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 151], 00:13:41.587 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 165], 00:13:41.587 | 70.00th=[ 174], 80.00th=[ 188], 90.00th=[ 237], 95.00th=[ 258], 00:13:41.587 | 99.00th=[ 293], 99.50th=[ 302], 99.90th=[ 383], 99.95th=[ 383], 00:13:41.587 | 99.99th=[ 383] 00:13:41.587 bw ( KiB/s): min= 9520, max= 9520, per=96.84%, avg=9520.00, stdev= 0.00, samples=1 00:13:41.587 iops : min= 2380, max= 2380, avg=2380.00, stdev= 0.00, samples=1 00:13:41.587 lat (usec) : 250=89.84%, 500=10.09%, 750=0.07% 00:13:41.587 cpu : usr=3.20%, sys=4.90%, ctx=4510, majf=0, minf=2 00:13:41.587 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:41.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:41.587 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:41.587 issued rwts: total=2048,2460,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:41.587 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:41.587 00:13:41.587 Run status group 0 (all jobs): 00:13:41.587 READ: bw=8184KiB/s (8380kB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=8192KiB (8389kB), run=1001-1001msec 00:13:41.587 WRITE: bw=9830KiB/s (10.1MB/s), 9830KiB/s-9830KiB/s (10.1MB/s-10.1MB/s), io=9840KiB (10.1MB), run=1001-1001msec 00:13:41.587 00:13:41.587 Disk stats (read/write): 00:13:41.587 nvme0n1: ios=1974/2048, merge=0/0, ticks=1406/343, in_queue=1749, util=98.60% 00:13:41.587 01:06:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:41.845 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:13:41.845 01:06:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:41.845 01:06:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:13:41.845 01:06:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:41.845 01:06:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:41.845 01:06:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:41.845 01:06:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:41.845 01:06:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:13:41.845 01:06:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:13:41.845 01:06:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:13:41.845 01:06:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:41.845 01:06:57 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@117 -- # sync 00:13:41.845 01:06:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:41.845 01:06:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:13:41.845 01:06:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:41.845 01:06:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:41.845 rmmod nvme_tcp 00:13:41.845 rmmod nvme_fabrics 00:13:41.845 rmmod nvme_keyring 00:13:41.845 01:06:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:41.845 01:06:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:13:41.845 01:06:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:13:41.845 01:06:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 4131551 ']' 00:13:41.845 01:06:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 4131551 00:13:41.845 01:06:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 4131551 ']' 00:13:41.845 01:06:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 4131551 00:13:41.845 01:06:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:13:41.845 01:06:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:41.845 01:06:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4131551 00:13:41.845 01:06:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:41.845 01:06:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:41.845 01:06:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4131551' 00:13:41.845 killing process with pid 4131551 00:13:41.845 01:06:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 4131551 00:13:41.845 01:06:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 4131551 00:13:42.102 01:06:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:42.102 01:06:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:42.102 01:06:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:42.102 01:06:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:42.102 01:06:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:42.102 01:06:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:42.102 01:06:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:42.102 01:06:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:44.639 01:07:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:44.639 00:13:44.639 real 0m10.054s 00:13:44.639 user 0m22.535s 00:13:44.639 sys 0m2.517s 00:13:44.639 01:07:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:44.639 01:07:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:44.639 ************************************ 00:13:44.639 END TEST nvmf_nmic 00:13:44.639 ************************************ 00:13:44.639 01:07:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:44.639 01:07:00 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:44.639 01:07:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:44.639 
01:07:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:44.639 01:07:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:44.639 ************************************ 00:13:44.639 START TEST nvmf_fio_target 00:13:44.639 ************************************ 00:13:44.639 01:07:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:44.639 * Looking for test storage... 00:13:44.639 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:44.639 01:07:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:44.639 01:07:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:13:44.639 01:07:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:44.639 01:07:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:44.639 01:07:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:44.639 01:07:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:44.639 01:07:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:44.639 01:07:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:44.639 01:07:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:44.639 01:07:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:44.639 01:07:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:44.639 01:07:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:44.639 01:07:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:44.639 01:07:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:44.639 01:07:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:44.639 01:07:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:44.639 01:07:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:44.639 01:07:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:44.639 01:07:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:44.639 01:07:00 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:44.639 01:07:00 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:44.639 01:07:00 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:44.639 01:07:00 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.639 01:07:00 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.639 01:07:00 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.639 01:07:00 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:13:44.639 01:07:00 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.639 01:07:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:13:44.640 01:07:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:44.640 01:07:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:44.640 01:07:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:44.640 01:07:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:44.640 01:07:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:44.640 01:07:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:44.640 01:07:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:44.640 01:07:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:44.640 01:07:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:44.640 01:07:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:44.640 01:07:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:44.640 01:07:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:13:44.640 01:07:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:44.640 01:07:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:44.640 01:07:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:44.640 01:07:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:44.640 01:07:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:44.640 01:07:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:44.640 01:07:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:44.640 01:07:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:44.640 01:07:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:44.640 01:07:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:44.640 01:07:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:13:44.640 01:07:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.543 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:46.543 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:13:46.543 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:46.543 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:46.543 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:46.543 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:46.543 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:46.543 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:13:46.543 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:46.543 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:13:46.543 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:13:46.543 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:13:46.543 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:13:46.543 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:13:46.543 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:13:46.543 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:46.543 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:46.543 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:46.543 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:46.543 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:46.543 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:46.543 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:46.543 01:07:02 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:46.543 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:46.543 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:46.543 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:46.543 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:46.543 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:46.543 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:46.543 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:46.543 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:46.543 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:46.543 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:46.543 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:46.543 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:46.543 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:46.543 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:46.543 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:46.543 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:46.543 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:46.543 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:46.543 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:46.543 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:46.543 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:46.543 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:46.543 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:46.543 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:46.543 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:46.543 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:46.543 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:46.543 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:46.543 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:46.543 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:46.543 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:46.543 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:46.543 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:46.543 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:46.543 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:46.543 01:07:02 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:46.543 Found net devices under 0000:09:00.0: cvl_0_0 00:13:46.543 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:46.543 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:46.543 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:46.543 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:46.544 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:46.544 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:46.544 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:46.544 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:46.544 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:46.544 Found net devices under 0000:09:00.1: cvl_0_1 00:13:46.544 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:46.544 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:46.544 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:13:46.544 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:46.544 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:46.544 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:46.544 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:46.544 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:46.544 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:46.544 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:46.544 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:46.544 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:46.544 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:46.544 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:46.544 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:46.544 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:46.544 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:46.544 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:46.544 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:46.544 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:46.544 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:46.544 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:46.544 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:13:46.544 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:46.544 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:46.544 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:46.544 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:46.544 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.151 ms 00:13:46.544 00:13:46.544 --- 10.0.0.2 ping statistics --- 00:13:46.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:46.544 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:13:46.544 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:46.544 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:46.544 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms 00:13:46.544 00:13:46.544 --- 10.0.0.1 ping statistics --- 00:13:46.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:46.544 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:13:46.544 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:46.544 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:13:46.544 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:46.544 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:46.544 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:46.544 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:46.544 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:46.544 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:46.544 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:46.544 01:07:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:13:46.544 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:46.544 01:07:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:46.544 01:07:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.544 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=4134266 00:13:46.544 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:46.544 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 4134266 00:13:46.544 01:07:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 4134266 ']' 00:13:46.544 01:07:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:46.544 01:07:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:46.544 01:07:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:46.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
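The nvmf/common.sh entries above build the physical-NIC test topology before the target starts: the two E810 ports appear as cvl_0_0 and cvl_0_1, the target-side port is moved into a private network namespace, each side gets a 10.0.0.x/24 address, the NVMe/TCP listener port is opened in the firewall, and a ping in each direction proves the link before nvmf_tgt is launched inside the namespace. A minimal sketch of that plumbing with the names from this run (root privileges assumed):

  ip netns add cvl_0_0_ns_spdk                                        # private namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP listener port
  ping -c 1 10.0.0.2                                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator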
00:13:46.544 01:07:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:46.544 01:07:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.802 [2024-07-16 01:07:02.537595] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:13:46.802 [2024-07-16 01:07:02.537689] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:46.802 EAL: No free 2048 kB hugepages reported on node 1 00:13:46.802 [2024-07-16 01:07:02.602218] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:46.802 [2024-07-16 01:07:02.706903] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:46.802 [2024-07-16 01:07:02.706994] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:46.802 [2024-07-16 01:07:02.707018] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:46.802 [2024-07-16 01:07:02.707031] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:46.802 [2024-07-16 01:07:02.707042] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:46.802 [2024-07-16 01:07:02.707110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:46.802 [2024-07-16 01:07:02.707161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:46.802 [2024-07-16 01:07:02.707211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.802 [2024-07-16 01:07:02.707209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:47.059 01:07:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:47.059 01:07:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:13:47.059 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:47.059 01:07:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:47.059 01:07:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.059 01:07:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:47.059 01:07:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:47.316 [2024-07-16 01:07:03.070426] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:47.316 01:07:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:47.574 01:07:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:13:47.574 01:07:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:47.830 01:07:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:13:47.830 01:07:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:48.087 01:07:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
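fio.sh layers its backing devices: two plain malloc bdevs (Malloc0, Malloc1) become ordinary namespaces, and further malloc bdevs are assembled by the entries just below into a raid0 and a concat bdev, all exported through nqn.2016-06.io.spdk:cnode1. A minimal sketch of that assembly, using the auto-assigned bdev names from this run (rpc.py path shortened; -z 64 appears to set the strip size in KiB):

  scripts/rpc.py bdev_malloc_create 64 512    # 64 MiB, 512-byte blocks; auto-named Malloc2 in this run
  scripts/rpc.py bdev_malloc_create 64 512    # Malloc3
  scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'                # striped raid0 over two bdevs
  scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' # concatenation of three bdevs
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0                   # exported as extra namespaces
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0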
00:13:48.087 01:07:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:48.343 01:07:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:13:48.343 01:07:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:13:48.600 01:07:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:48.856 01:07:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:13:48.856 01:07:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:49.112 01:07:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:13:49.112 01:07:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:49.369 01:07:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:13:49.369 01:07:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:13:49.625 01:07:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:49.881 01:07:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:49.881 01:07:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:50.137 01:07:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:50.137 01:07:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:50.394 01:07:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:50.394 [2024-07-16 01:07:06.380837] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:50.650 01:07:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:13:50.907 01:07:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:13:50.907 01:07:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:51.837 01:07:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:13:51.837 01:07:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:13:51.837 01:07:07 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:51.837 01:07:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:13:51.837 01:07:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:13:51.837 01:07:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:13:53.731 01:07:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:53.731 01:07:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:53.731 01:07:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:53.731 01:07:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:13:53.731 01:07:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:53.731 01:07:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:13:53.731 01:07:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:53.731 [global] 00:13:53.731 thread=1 00:13:53.731 invalidate=1 00:13:53.731 rw=write 00:13:53.731 time_based=1 00:13:53.731 runtime=1 00:13:53.731 ioengine=libaio 00:13:53.731 direct=1 00:13:53.731 bs=4096 00:13:53.731 iodepth=1 00:13:53.731 norandommap=0 00:13:53.731 numjobs=1 00:13:53.731 00:13:53.731 verify_dump=1 00:13:53.732 verify_backlog=512 00:13:53.732 verify_state_save=0 00:13:53.732 do_verify=1 00:13:53.732 verify=crc32c-intel 00:13:53.732 [job0] 00:13:53.732 filename=/dev/nvme0n1 00:13:53.732 [job1] 00:13:53.732 filename=/dev/nvme0n2 00:13:53.732 [job2] 00:13:53.732 filename=/dev/nvme0n3 00:13:53.732 [job3] 00:13:53.732 filename=/dev/nvme0n4 00:13:53.732 Could not set queue depth (nvme0n1) 00:13:53.732 Could not set queue depth (nvme0n2) 00:13:53.732 Could not set queue depth (nvme0n3) 00:13:53.732 Could not set queue depth (nvme0n4) 00:13:53.989 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:53.989 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:53.989 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:53.989 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:53.989 fio-3.35 00:13:53.989 Starting 4 threads 00:13:55.391 00:13:55.391 job0: (groupid=0, jobs=1): err= 0: pid=4135334: Tue Jul 16 01:07:11 2024 00:13:55.391 read: IOPS=1023, BW=4095KiB/s (4193kB/s)(4136KiB/1010msec) 00:13:55.391 slat (nsec): min=4567, max=66909, avg=10074.10, stdev=6021.33 00:13:55.391 clat (usec): min=206, max=41032, avg=664.89, stdev=3986.22 00:13:55.391 lat (usec): min=212, max=41049, avg=674.96, stdev=3987.48 00:13:55.391 clat percentiles (usec): 00:13:55.391 | 1.00th=[ 225], 5.00th=[ 231], 10.00th=[ 235], 20.00th=[ 241], 00:13:55.391 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 260], 00:13:55.391 | 70.00th=[ 265], 80.00th=[ 285], 90.00th=[ 375], 95.00th=[ 392], 00:13:55.391 | 99.00th=[ 545], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:55.391 | 99.99th=[41157] 00:13:55.392 write: IOPS=1520, BW=6083KiB/s (6229kB/s)(6144KiB/1010msec); 0 zone resets 00:13:55.392 slat (nsec): min=5717, max=35029, avg=11827.46, stdev=5116.44 00:13:55.392 clat 
(usec): min=140, max=2404, avg=186.26, stdev=86.65 00:13:55.392 lat (usec): min=148, max=2411, avg=198.09, stdev=85.93 00:13:55.392 clat percentiles (usec): 00:13:55.392 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 161], 00:13:55.392 | 30.00th=[ 165], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 176], 00:13:55.392 | 70.00th=[ 184], 80.00th=[ 194], 90.00th=[ 229], 95.00th=[ 289], 00:13:55.392 | 99.00th=[ 314], 99.50th=[ 322], 99.90th=[ 2311], 99.95th=[ 2409], 00:13:55.392 | 99.99th=[ 2409] 00:13:55.392 bw ( KiB/s): min= 4096, max= 8192, per=28.72%, avg=6144.00, stdev=2896.31, samples=2 00:13:55.392 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:13:55.392 lat (usec) : 250=72.06%, 500=27.32%, 750=0.16% 00:13:55.392 lat (msec) : 4=0.08%, 50=0.39% 00:13:55.392 cpu : usr=1.29%, sys=3.07%, ctx=2570, majf=0, minf=2 00:13:55.392 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:55.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.392 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.392 issued rwts: total=1034,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:55.392 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:55.392 job1: (groupid=0, jobs=1): err= 0: pid=4135335: Tue Jul 16 01:07:11 2024 00:13:55.392 read: IOPS=232, BW=930KiB/s (952kB/s)(956KiB/1028msec) 00:13:55.392 slat (nsec): min=4502, max=33682, avg=10992.51, stdev=6161.82 00:13:55.392 clat (usec): min=230, max=41991, avg=3755.01, stdev=11444.40 00:13:55.392 lat (usec): min=237, max=42009, avg=3766.00, stdev=11448.06 00:13:55.392 clat percentiles (usec): 00:13:55.392 | 1.00th=[ 235], 5.00th=[ 239], 10.00th=[ 243], 20.00th=[ 251], 00:13:55.392 | 30.00th=[ 262], 40.00th=[ 273], 50.00th=[ 297], 60.00th=[ 318], 00:13:55.392 | 70.00th=[ 359], 80.00th=[ 388], 90.00th=[ 424], 95.00th=[41681], 00:13:55.392 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:55.392 | 99.99th=[42206] 00:13:55.392 write: IOPS=498, BW=1992KiB/s (2040kB/s)(2048KiB/1028msec); 0 zone resets 00:13:55.392 slat (nsec): min=6424, max=35492, avg=9808.77, stdev=4209.92 00:13:55.392 clat (usec): min=149, max=624, avg=235.34, stdev=82.17 00:13:55.392 lat (usec): min=157, max=634, avg=245.15, stdev=84.91 00:13:55.392 clat percentiles (usec): 00:13:55.392 | 1.00th=[ 157], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 169], 00:13:55.392 | 30.00th=[ 176], 40.00th=[ 184], 50.00th=[ 204], 60.00th=[ 231], 00:13:55.392 | 70.00th=[ 249], 80.00th=[ 289], 90.00th=[ 379], 95.00th=[ 396], 00:13:55.392 | 99.00th=[ 445], 99.50th=[ 486], 99.90th=[ 627], 99.95th=[ 627], 00:13:55.392 | 99.99th=[ 627] 00:13:55.392 bw ( KiB/s): min= 4096, max= 4096, per=19.15%, avg=4096.00, stdev= 0.00, samples=1 00:13:55.392 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:55.392 lat (usec) : 250=53.26%, 500=43.81%, 750=0.27% 00:13:55.392 lat (msec) : 50=2.66% 00:13:55.392 cpu : usr=0.49%, sys=0.58%, ctx=752, majf=0, minf=1 00:13:55.392 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:55.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.392 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.392 issued rwts: total=239,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:55.392 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:55.392 job2: (groupid=0, jobs=1): err= 0: pid=4135336: Tue Jul 16 01:07:11 2024 00:13:55.392 read: IOPS=1534, 
BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:13:55.392 slat (nsec): min=5265, max=69694, avg=19940.51, stdev=10231.37 00:13:55.392 clat (usec): min=234, max=587, avg=351.24, stdev=51.03 00:13:55.392 lat (usec): min=242, max=593, avg=371.18, stdev=51.83 00:13:55.392 clat percentiles (usec): 00:13:55.392 | 1.00th=[ 260], 5.00th=[ 277], 10.00th=[ 289], 20.00th=[ 306], 00:13:55.392 | 30.00th=[ 322], 40.00th=[ 334], 50.00th=[ 347], 60.00th=[ 359], 00:13:55.392 | 70.00th=[ 379], 80.00th=[ 392], 90.00th=[ 412], 95.00th=[ 433], 00:13:55.392 | 99.00th=[ 502], 99.50th=[ 537], 99.90th=[ 562], 99.95th=[ 586], 00:13:55.392 | 99.99th=[ 586] 00:13:55.392 write: IOPS=1707, BW=6829KiB/s (6993kB/s)(6836KiB/1001msec); 0 zone resets 00:13:55.392 slat (nsec): min=6899, max=62270, avg=14022.64, stdev=6394.76 00:13:55.392 clat (usec): min=157, max=591, avg=228.53, stdev=35.45 00:13:55.392 lat (usec): min=167, max=601, avg=242.55, stdev=34.26 00:13:55.392 clat percentiles (usec): 00:13:55.392 | 1.00th=[ 176], 5.00th=[ 190], 10.00th=[ 196], 20.00th=[ 202], 00:13:55.392 | 30.00th=[ 210], 40.00th=[ 217], 50.00th=[ 225], 60.00th=[ 233], 00:13:55.392 | 70.00th=[ 241], 80.00th=[ 247], 90.00th=[ 258], 95.00th=[ 277], 00:13:55.392 | 99.00th=[ 388], 99.50th=[ 400], 99.90th=[ 570], 99.95th=[ 594], 00:13:55.392 | 99.99th=[ 594] 00:13:55.392 bw ( KiB/s): min= 8192, max= 8192, per=38.29%, avg=8192.00, stdev= 0.00, samples=1 00:13:55.392 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:55.392 lat (usec) : 250=44.78%, 500=54.67%, 750=0.55% 00:13:55.392 cpu : usr=2.90%, sys=6.00%, ctx=3248, majf=0, minf=1 00:13:55.392 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:55.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.392 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.392 issued rwts: total=1536,1709,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:55.392 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:55.392 job3: (groupid=0, jobs=1): err= 0: pid=4135337: Tue Jul 16 01:07:11 2024 00:13:55.392 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:13:55.392 slat (nsec): min=6034, max=39323, avg=14882.95, stdev=5982.01 00:13:55.392 clat (usec): min=235, max=633, avg=352.68, stdev=42.36 00:13:55.392 lat (usec): min=242, max=641, avg=367.56, stdev=42.65 00:13:55.392 clat percentiles (usec): 00:13:55.392 | 1.00th=[ 281], 5.00th=[ 297], 10.00th=[ 310], 20.00th=[ 322], 00:13:55.392 | 30.00th=[ 330], 40.00th=[ 343], 50.00th=[ 347], 60.00th=[ 355], 00:13:55.392 | 70.00th=[ 367], 80.00th=[ 379], 90.00th=[ 400], 95.00th=[ 420], 00:13:55.392 | 99.00th=[ 523], 99.50th=[ 553], 99.90th=[ 603], 99.95th=[ 635], 00:13:55.392 | 99.99th=[ 635] 00:13:55.392 write: IOPS=1739, BW=6957KiB/s (7124kB/s)(6964KiB/1001msec); 0 zone resets 00:13:55.392 slat (nsec): min=6463, max=67730, avg=15254.30, stdev=6996.27 00:13:55.392 clat (usec): min=155, max=2314, avg=226.38, stdev=61.25 00:13:55.392 lat (usec): min=173, max=2321, avg=241.64, stdev=60.52 00:13:55.392 clat percentiles (usec): 00:13:55.392 | 1.00th=[ 178], 5.00th=[ 186], 10.00th=[ 192], 20.00th=[ 198], 00:13:55.392 | 30.00th=[ 204], 40.00th=[ 210], 50.00th=[ 217], 60.00th=[ 225], 00:13:55.392 | 70.00th=[ 235], 80.00th=[ 247], 90.00th=[ 273], 95.00th=[ 293], 00:13:55.392 | 99.00th=[ 347], 99.50th=[ 371], 99.90th=[ 537], 99.95th=[ 2311], 00:13:55.392 | 99.99th=[ 2311] 00:13:55.392 bw ( KiB/s): min= 8192, max= 8192, per=38.29%, avg=8192.00, stdev= 0.00, samples=1 
00:13:55.392 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:55.392 lat (usec) : 250=43.91%, 500=55.36%, 750=0.70% 00:13:55.392 lat (msec) : 4=0.03% 00:13:55.392 cpu : usr=4.70%, sys=5.80%, ctx=3278, majf=0, minf=1 00:13:55.392 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:55.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.392 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.392 issued rwts: total=1536,1741,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:55.392 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:55.392 00:13:55.392 Run status group 0 (all jobs): 00:13:55.392 READ: bw=16.5MiB/s (17.3MB/s), 930KiB/s-6138KiB/s (952kB/s-6285kB/s), io=17.0MiB (17.8MB), run=1001-1028msec 00:13:55.392 WRITE: bw=20.9MiB/s (21.9MB/s), 1992KiB/s-6957KiB/s (2040kB/s-7124kB/s), io=21.5MiB (22.5MB), run=1001-1028msec 00:13:55.392 00:13:55.392 Disk stats (read/write): 00:13:55.392 nvme0n1: ios=1080/1536, merge=0/0, ticks=535/283, in_queue=818, util=86.57% 00:13:55.392 nvme0n2: ios=276/512, merge=0/0, ticks=803/119, in_queue=922, util=90.34% 00:13:55.392 nvme0n3: ios=1271/1536, merge=0/0, ticks=1328/337, in_queue=1665, util=93.42% 00:13:55.392 nvme0n4: ios=1317/1536, merge=0/0, ticks=509/341, in_queue=850, util=95.47% 00:13:55.392 01:07:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:13:55.392 [global] 00:13:55.392 thread=1 00:13:55.392 invalidate=1 00:13:55.392 rw=randwrite 00:13:55.392 time_based=1 00:13:55.392 runtime=1 00:13:55.392 ioengine=libaio 00:13:55.392 direct=1 00:13:55.392 bs=4096 00:13:55.392 iodepth=1 00:13:55.392 norandommap=0 00:13:55.392 numjobs=1 00:13:55.392 00:13:55.392 verify_dump=1 00:13:55.392 verify_backlog=512 00:13:55.392 verify_state_save=0 00:13:55.392 do_verify=1 00:13:55.392 verify=crc32c-intel 00:13:55.392 [job0] 00:13:55.392 filename=/dev/nvme0n1 00:13:55.392 [job1] 00:13:55.392 filename=/dev/nvme0n2 00:13:55.392 [job2] 00:13:55.392 filename=/dev/nvme0n3 00:13:55.392 [job3] 00:13:55.392 filename=/dev/nvme0n4 00:13:55.392 Could not set queue depth (nvme0n1) 00:13:55.392 Could not set queue depth (nvme0n2) 00:13:55.392 Could not set queue depth (nvme0n3) 00:13:55.392 Could not set queue depth (nvme0n4) 00:13:55.392 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:55.392 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:55.392 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:55.392 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:55.392 fio-3.35 00:13:55.392 Starting 4 threads 00:13:56.771 00:13:56.771 job0: (groupid=0, jobs=1): err= 0: pid=4135566: Tue Jul 16 01:07:12 2024 00:13:56.771 read: IOPS=20, BW=83.8KiB/s (85.8kB/s)(84.0KiB/1002msec) 00:13:56.771 slat (nsec): min=8071, max=34761, avg=26493.86, stdev=9697.07 00:13:56.771 clat (usec): min=4916, max=41034, avg=39251.59, stdev=7867.15 00:13:56.771 lat (usec): min=4950, max=41047, avg=39278.08, stdev=7865.38 00:13:56.771 clat percentiles (usec): 00:13:56.771 | 1.00th=[ 4948], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:13:56.771 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:56.771 
| 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:56.771 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:56.771 | 99.99th=[41157] 00:13:56.771 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:13:56.771 slat (nsec): min=7488, max=31582, avg=11816.74, stdev=5078.84 00:13:56.771 clat (usec): min=161, max=1148, avg=330.00, stdev=101.29 00:13:56.771 lat (usec): min=169, max=1156, avg=341.82, stdev=101.45 00:13:56.771 clat percentiles (usec): 00:13:56.771 | 1.00th=[ 174], 5.00th=[ 192], 10.00th=[ 206], 20.00th=[ 253], 00:13:56.771 | 30.00th=[ 281], 40.00th=[ 306], 50.00th=[ 330], 60.00th=[ 355], 00:13:56.771 | 70.00th=[ 375], 80.00th=[ 396], 90.00th=[ 441], 95.00th=[ 461], 00:13:56.771 | 99.00th=[ 529], 99.50th=[ 832], 99.90th=[ 1156], 99.95th=[ 1156], 00:13:56.771 | 99.99th=[ 1156] 00:13:56.771 bw ( KiB/s): min= 4096, max= 4096, per=32.71%, avg=4096.00, stdev= 0.00, samples=1 00:13:56.771 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:56.771 lat (usec) : 250=18.76%, 500=75.23%, 750=1.31%, 1000=0.38% 00:13:56.771 lat (msec) : 2=0.38%, 10=0.19%, 50=3.75% 00:13:56.771 cpu : usr=0.20%, sys=1.10%, ctx=534, majf=0, minf=2 00:13:56.771 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:56.771 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:56.771 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:56.771 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:56.771 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:56.771 job1: (groupid=0, jobs=1): err= 0: pid=4135567: Tue Jul 16 01:07:12 2024 00:13:56.771 read: IOPS=198, BW=792KiB/s (811kB/s)(816KiB/1030msec) 00:13:56.771 slat (nsec): min=5786, max=37131, avg=10356.45, stdev=7128.69 00:13:56.771 clat (usec): min=244, max=42036, avg=4155.64, stdev=11910.75 00:13:56.771 lat (usec): min=252, max=42068, avg=4166.00, stdev=11916.33 00:13:56.771 clat percentiles (usec): 00:13:56.771 | 1.00th=[ 262], 5.00th=[ 277], 10.00th=[ 281], 20.00th=[ 289], 00:13:56.771 | 30.00th=[ 293], 40.00th=[ 302], 50.00th=[ 310], 60.00th=[ 326], 00:13:56.771 | 70.00th=[ 351], 80.00th=[ 437], 90.00th=[ 594], 95.00th=[41157], 00:13:56.771 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:13:56.771 | 99.99th=[42206] 00:13:56.771 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:13:56.771 slat (nsec): min=7158, max=37840, avg=12542.10, stdev=4474.22 00:13:56.771 clat (usec): min=176, max=1013, avg=333.84, stdev=95.27 00:13:56.771 lat (usec): min=187, max=1026, avg=346.39, stdev=95.96 00:13:56.771 clat percentiles (usec): 00:13:56.771 | 1.00th=[ 188], 5.00th=[ 210], 10.00th=[ 219], 20.00th=[ 235], 00:13:56.771 | 30.00th=[ 262], 40.00th=[ 306], 50.00th=[ 343], 60.00th=[ 383], 00:13:56.771 | 70.00th=[ 392], 80.00th=[ 400], 90.00th=[ 433], 95.00th=[ 461], 00:13:56.771 | 99.00th=[ 545], 99.50th=[ 701], 99.90th=[ 1012], 99.95th=[ 1012], 00:13:56.771 | 99.99th=[ 1012] 00:13:56.771 bw ( KiB/s): min= 4096, max= 4096, per=32.71%, avg=4096.00, stdev= 0.00, samples=1 00:13:56.771 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:56.771 lat (usec) : 250=18.99%, 500=75.84%, 750=2.09%, 1000=0.14% 00:13:56.771 lat (msec) : 2=0.14%, 4=0.14%, 50=2.65% 00:13:56.771 cpu : usr=0.29%, sys=1.07%, ctx=718, majf=0, minf=1 00:13:56.771 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:56.771 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:56.771 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:56.771 issued rwts: total=204,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:56.771 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:56.771 job2: (groupid=0, jobs=1): err= 0: pid=4135568: Tue Jul 16 01:07:12 2024 00:13:56.771 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:13:56.771 slat (nsec): min=4876, max=64325, avg=15384.51, stdev=7715.91 00:13:56.771 clat (usec): min=209, max=41034, avg=381.19, stdev=1044.16 00:13:56.771 lat (usec): min=219, max=41047, avg=396.57, stdev=1044.48 00:13:56.771 clat percentiles (usec): 00:13:56.771 | 1.00th=[ 227], 5.00th=[ 243], 10.00th=[ 251], 20.00th=[ 265], 00:13:56.771 | 30.00th=[ 277], 40.00th=[ 293], 50.00th=[ 318], 60.00th=[ 334], 00:13:56.771 | 70.00th=[ 429], 80.00th=[ 461], 90.00th=[ 506], 95.00th=[ 537], 00:13:56.771 | 99.00th=[ 570], 99.50th=[ 586], 99.90th=[ 2507], 99.95th=[41157], 00:13:56.771 | 99.99th=[41157] 00:13:56.771 write: IOPS=1686, BW=6745KiB/s (6907kB/s)(6752KiB/1001msec); 0 zone resets 00:13:56.771 slat (nsec): min=6287, max=69156, avg=15716.44, stdev=8818.32 00:13:56.771 clat (usec): min=151, max=434, avg=207.58, stdev=41.29 00:13:56.771 lat (usec): min=158, max=472, avg=223.30, stdev=44.85 00:13:56.771 clat percentiles (usec): 00:13:56.771 | 1.00th=[ 161], 5.00th=[ 172], 10.00th=[ 178], 20.00th=[ 186], 00:13:56.771 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 198], 60.00th=[ 202], 00:13:56.771 | 70.00th=[ 210], 80.00th=[ 217], 90.00th=[ 235], 95.00th=[ 306], 00:13:56.771 | 99.00th=[ 396], 99.50th=[ 412], 99.90th=[ 424], 99.95th=[ 437], 00:13:56.771 | 99.99th=[ 437] 00:13:56.771 bw ( KiB/s): min= 8192, max= 8192, per=65.43%, avg=8192.00, stdev= 0.00, samples=1 00:13:56.771 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:56.771 lat (usec) : 250=53.07%, 500=41.25%, 750=5.61% 00:13:56.771 lat (msec) : 4=0.03%, 50=0.03% 00:13:56.771 cpu : usr=3.40%, sys=7.00%, ctx=3224, majf=0, minf=1 00:13:56.771 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:56.771 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:56.771 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:56.771 issued rwts: total=1536,1688,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:56.771 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:56.771 job3: (groupid=0, jobs=1): err= 0: pid=4135569: Tue Jul 16 01:07:12 2024 00:13:56.771 read: IOPS=19, BW=79.9KiB/s (81.8kB/s)(80.0KiB/1001msec) 00:13:56.771 slat (nsec): min=7880, max=50891, avg=29806.25, stdev=11647.06 00:13:56.771 clat (usec): min=40630, max=41117, avg=40945.35, stdev=93.08 00:13:56.771 lat (usec): min=40638, max=41131, avg=40975.16, stdev=94.59 00:13:56.771 clat percentiles (usec): 00:13:56.771 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:13:56.771 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:56.771 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:56.771 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:56.771 | 99.99th=[41157] 00:13:56.771 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:13:56.771 slat (nsec): min=8020, max=41985, avg=14401.89, stdev=6387.25 00:13:56.771 clat (usec): min=163, max=626, avg=335.44, stdev=91.26 00:13:56.771 lat (usec): min=176, max=643, 
avg=349.84, stdev=92.68 00:13:56.771 clat percentiles (usec): 00:13:56.771 | 1.00th=[ 176], 5.00th=[ 208], 10.00th=[ 223], 20.00th=[ 235], 00:13:56.771 | 30.00th=[ 269], 40.00th=[ 302], 50.00th=[ 330], 60.00th=[ 375], 00:13:56.771 | 70.00th=[ 400], 80.00th=[ 412], 90.00th=[ 457], 95.00th=[ 478], 00:13:56.771 | 99.00th=[ 529], 99.50th=[ 553], 99.90th=[ 627], 99.95th=[ 627], 00:13:56.771 | 99.99th=[ 627] 00:13:56.771 bw ( KiB/s): min= 4096, max= 4096, per=32.71%, avg=4096.00, stdev= 0.00, samples=1 00:13:56.771 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:56.771 lat (usec) : 250=25.00%, 500=68.61%, 750=2.63% 00:13:56.771 lat (msec) : 50=3.76% 00:13:56.771 cpu : usr=0.60%, sys=0.80%, ctx=533, majf=0, minf=1 00:13:56.772 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:56.772 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:56.772 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:56.772 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:56.772 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:56.772 00:13:56.772 Run status group 0 (all jobs): 00:13:56.772 READ: bw=6917KiB/s (7083kB/s), 79.9KiB/s-6138KiB/s (81.8kB/s-6285kB/s), io=7124KiB (7295kB), run=1001-1030msec 00:13:56.772 WRITE: bw=12.2MiB/s (12.8MB/s), 1988KiB/s-6745KiB/s (2036kB/s-6907kB/s), io=12.6MiB (13.2MB), run=1001-1030msec 00:13:56.772 00:13:56.772 Disk stats (read/write): 00:13:56.772 nvme0n1: ios=60/512, merge=0/0, ticks=960/172, in_queue=1132, util=98.70% 00:13:56.772 nvme0n2: ios=229/512, merge=0/0, ticks=944/170, in_queue=1114, util=97.26% 00:13:56.772 nvme0n3: ios=1279/1536, merge=0/0, ticks=463/286, in_queue=749, util=89.04% 00:13:56.772 nvme0n4: ios=40/512, merge=0/0, ticks=1641/156, in_queue=1797, util=98.74% 00:13:56.772 01:07:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:13:56.772 [global] 00:13:56.772 thread=1 00:13:56.772 invalidate=1 00:13:56.772 rw=write 00:13:56.772 time_based=1 00:13:56.772 runtime=1 00:13:56.772 ioengine=libaio 00:13:56.772 direct=1 00:13:56.772 bs=4096 00:13:56.772 iodepth=128 00:13:56.772 norandommap=0 00:13:56.772 numjobs=1 00:13:56.772 00:13:56.772 verify_dump=1 00:13:56.772 verify_backlog=512 00:13:56.772 verify_state_save=0 00:13:56.772 do_verify=1 00:13:56.772 verify=crc32c-intel 00:13:56.772 [job0] 00:13:56.772 filename=/dev/nvme0n1 00:13:56.772 [job1] 00:13:56.772 filename=/dev/nvme0n2 00:13:56.772 [job2] 00:13:56.772 filename=/dev/nvme0n3 00:13:56.772 [job3] 00:13:56.772 filename=/dev/nvme0n4 00:13:56.772 Could not set queue depth (nvme0n1) 00:13:56.772 Could not set queue depth (nvme0n2) 00:13:56.772 Could not set queue depth (nvme0n3) 00:13:56.772 Could not set queue depth (nvme0n4) 00:13:57.029 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:57.029 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:57.029 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:57.029 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:57.029 fio-3.35 00:13:57.030 Starting 4 threads 00:13:58.402 00:13:58.402 job0: (groupid=0, jobs=1): err= 0: pid=4135803: Tue Jul 16 01:07:14 2024 00:13:58.402 read: 
IOPS=2807, BW=11.0MiB/s (11.5MB/s)(11.5MiB/1047msec) 00:13:58.402 slat (usec): min=3, max=19869, avg=169.48, stdev=1092.31 00:13:58.402 clat (usec): min=8202, max=89884, avg=22480.03, stdev=16440.00 00:13:58.402 lat (msec): min=8, max=106, avg=22.65, stdev=16.57 00:13:58.402 clat percentiles (usec): 00:13:58.402 | 1.00th=[ 8586], 5.00th=[ 9503], 10.00th=[10159], 20.00th=[10683], 00:13:58.402 | 30.00th=[11469], 40.00th=[13304], 50.00th=[18220], 60.00th=[19006], 00:13:58.402 | 70.00th=[20317], 80.00th=[31065], 90.00th=[50070], 95.00th=[55313], 00:13:58.402 | 99.00th=[89654], 99.50th=[89654], 99.90th=[89654], 99.95th=[89654], 00:13:58.402 | 99.99th=[89654] 00:13:58.402 write: IOPS=2934, BW=11.5MiB/s (12.0MB/s)(12.0MiB/1047msec); 0 zone resets 00:13:58.402 slat (usec): min=4, max=25580, avg=152.32, stdev=1009.52 00:13:58.402 clat (usec): min=6212, max=61504, avg=21616.63, stdev=12415.39 00:13:58.402 lat (usec): min=6276, max=61520, avg=21768.96, stdev=12507.79 00:13:58.402 clat percentiles (usec): 00:13:58.402 | 1.00th=[ 7898], 5.00th=[ 8586], 10.00th=[10159], 20.00th=[10683], 00:13:58.402 | 30.00th=[11076], 40.00th=[14222], 50.00th=[14877], 60.00th=[23987], 00:13:58.402 | 70.00th=[26608], 80.00th=[31589], 90.00th=[41157], 95.00th=[48497], 00:13:58.402 | 99.00th=[54264], 99.50th=[54264], 99.90th=[54264], 99.95th=[56886], 00:13:58.402 | 99.99th=[61604] 00:13:58.402 bw ( KiB/s): min=11672, max=12904, per=21.24%, avg=12288.00, stdev=871.16, samples=2 00:13:58.402 iops : min= 2918, max= 3226, avg=3072.00, stdev=217.79, samples=2 00:13:58.402 lat (msec) : 10=7.82%, 20=52.97%, 50=33.27%, 100=5.94% 00:13:58.402 cpu : usr=4.59%, sys=7.46%, ctx=247, majf=0, minf=13 00:13:58.402 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:13:58.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:58.402 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:58.402 issued rwts: total=2939,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:58.402 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:58.402 job1: (groupid=0, jobs=1): err= 0: pid=4135804: Tue Jul 16 01:07:14 2024 00:13:58.402 read: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec) 00:13:58.402 slat (usec): min=2, max=12644, avg=93.85, stdev=742.97 00:13:58.402 clat (usec): min=1450, max=31962, avg=12707.93, stdev=4858.65 00:13:58.402 lat (usec): min=1497, max=31978, avg=12801.79, stdev=4919.36 00:13:58.402 clat percentiles (usec): 00:13:58.402 | 1.00th=[ 2311], 5.00th=[ 5735], 10.00th=[ 7767], 20.00th=[ 9110], 00:13:58.402 | 30.00th=[10421], 40.00th=[10814], 50.00th=[11469], 60.00th=[12911], 00:13:58.402 | 70.00th=[14615], 80.00th=[16581], 90.00th=[19530], 95.00th=[21890], 00:13:58.402 | 99.00th=[28443], 99.50th=[28443], 99.90th=[30016], 99.95th=[31065], 00:13:58.402 | 99.99th=[31851] 00:13:58.402 write: IOPS=5405, BW=21.1MiB/s (22.1MB/s)(21.2MiB/1002msec); 0 zone resets 00:13:58.402 slat (usec): min=3, max=10265, avg=78.80, stdev=562.78 00:13:58.402 clat (usec): min=237, max=26634, avg=11358.39, stdev=4515.57 00:13:58.402 lat (usec): min=418, max=26638, avg=11437.19, stdev=4557.76 00:13:58.402 clat percentiles (usec): 00:13:58.402 | 1.00th=[ 1270], 5.00th=[ 4686], 10.00th=[ 6521], 20.00th=[ 8160], 00:13:58.402 | 30.00th=[ 9241], 40.00th=[10028], 50.00th=[10814], 60.00th=[11469], 00:13:58.402 | 70.00th=[12125], 80.00th=[13829], 90.00th=[18744], 95.00th=[20317], 00:13:58.402 | 99.00th=[23987], 99.50th=[24249], 99.90th=[25560], 99.95th=[25560], 00:13:58.402 | 
99.99th=[26608] 00:13:58.402 bw ( KiB/s): min=20480, max=20480, per=35.40%, avg=20480.00, stdev= 0.00, samples=1 00:13:58.402 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:13:58.402 lat (usec) : 250=0.01%, 500=0.08%, 1000=0.22% 00:13:58.402 lat (msec) : 2=0.76%, 4=2.04%, 10=30.03%, 20=60.45%, 50=6.42% 00:13:58.402 cpu : usr=3.80%, sys=9.89%, ctx=359, majf=0, minf=11 00:13:58.402 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:13:58.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:58.402 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:58.402 issued rwts: total=5120,5416,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:58.402 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:58.402 job2: (groupid=0, jobs=1): err= 0: pid=4135817: Tue Jul 16 01:07:14 2024 00:13:58.402 read: IOPS=2536, BW=9.91MiB/s (10.4MB/s)(9.97MiB/1006msec) 00:13:58.402 slat (usec): min=2, max=27594, avg=207.67, stdev=1530.12 00:13:58.402 clat (usec): min=2701, max=88839, avg=24715.87, stdev=13688.98 00:13:58.402 lat (usec): min=8686, max=95962, avg=24923.54, stdev=13854.11 00:13:58.402 clat percentiles (usec): 00:13:58.402 | 1.00th=[11994], 5.00th=[14222], 10.00th=[14615], 20.00th=[15270], 00:13:58.402 | 30.00th=[16581], 40.00th=[17433], 50.00th=[18744], 60.00th=[21365], 00:13:58.402 | 70.00th=[25035], 80.00th=[31851], 90.00th=[46924], 95.00th=[54789], 00:13:58.402 | 99.00th=[71828], 99.50th=[82314], 99.90th=[88605], 99.95th=[88605], 00:13:58.402 | 99.99th=[88605] 00:13:58.402 write: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec); 0 zone resets 00:13:58.402 slat (usec): min=3, max=18814, avg=173.83, stdev=912.03 00:13:58.402 clat (usec): min=5969, max=76009, avg=25056.85, stdev=14007.85 00:13:58.402 lat (usec): min=5974, max=76020, avg=25230.68, stdev=14092.43 00:13:58.402 clat percentiles (usec): 00:13:58.402 | 1.00th=[ 6980], 5.00th=[10683], 10.00th=[12256], 20.00th=[14091], 00:13:58.402 | 30.00th=[15139], 40.00th=[17957], 50.00th=[20579], 60.00th=[25035], 00:13:58.402 | 70.00th=[26870], 80.00th=[35914], 90.00th=[44303], 95.00th=[53216], 00:13:58.402 | 99.00th=[71828], 99.50th=[73925], 99.90th=[76022], 99.95th=[76022], 00:13:58.402 | 99.99th=[76022] 00:13:58.402 bw ( KiB/s): min= 8192, max=12288, per=17.70%, avg=10240.00, stdev=2896.31, samples=2 00:13:58.402 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:13:58.402 lat (msec) : 4=0.02%, 10=2.58%, 20=49.73%, 50=40.53%, 100=7.14% 00:13:58.402 cpu : usr=2.49%, sys=3.98%, ctx=248, majf=0, minf=13 00:13:58.402 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:13:58.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:58.402 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:58.402 issued rwts: total=2552,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:58.402 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:58.402 job3: (groupid=0, jobs=1): err= 0: pid=4135823: Tue Jul 16 01:07:14 2024 00:13:58.402 read: IOPS=3904, BW=15.3MiB/s (16.0MB/s)(15.4MiB/1007msec) 00:13:58.402 slat (usec): min=2, max=30543, avg=138.36, stdev=1040.31 00:13:58.402 clat (usec): min=2041, max=59132, avg=17420.60, stdev=7548.88 00:13:58.402 lat (usec): min=2048, max=59135, avg=17558.96, stdev=7602.84 00:13:58.402 clat percentiles (usec): 00:13:58.402 | 1.00th=[ 8586], 5.00th=[ 9634], 10.00th=[11338], 20.00th=[12518], 00:13:58.403 | 30.00th=[13566], 
40.00th=[14091], 50.00th=[15270], 60.00th=[15926], 00:13:58.403 | 70.00th=[17695], 80.00th=[20317], 90.00th=[29230], 95.00th=[32900], 00:13:58.403 | 99.00th=[50594], 99.50th=[50594], 99.90th=[50594], 99.95th=[56886], 00:13:58.403 | 99.99th=[58983] 00:13:58.403 write: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec); 0 zone resets 00:13:58.403 slat (usec): min=3, max=11143, avg=103.95, stdev=602.89 00:13:58.403 clat (usec): min=4300, max=33866, avg=14243.98, stdev=3523.85 00:13:58.403 lat (usec): min=4512, max=33879, avg=14347.93, stdev=3560.81 00:13:58.403 clat percentiles (usec): 00:13:58.403 | 1.00th=[ 7504], 5.00th=[ 9372], 10.00th=[10814], 20.00th=[12125], 00:13:58.403 | 30.00th=[12387], 40.00th=[12649], 50.00th=[13304], 60.00th=[13829], 00:13:58.403 | 70.00th=[15401], 80.00th=[16712], 90.00th=[19006], 95.00th=[21365], 00:13:58.403 | 99.00th=[24773], 99.50th=[24773], 99.90th=[26346], 99.95th=[29492], 00:13:58.403 | 99.99th=[33817] 00:13:58.403 bw ( KiB/s): min=16384, max=16384, per=28.32%, avg=16384.00, stdev= 0.00, samples=2 00:13:58.403 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:13:58.403 lat (msec) : 4=0.19%, 10=6.63%, 20=79.11%, 50=13.54%, 100=0.54% 00:13:58.403 cpu : usr=3.38%, sys=6.56%, ctx=355, majf=0, minf=13 00:13:58.403 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:13:58.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:58.403 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:58.403 issued rwts: total=3932,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:58.403 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:58.403 00:13:58.403 Run status group 0 (all jobs): 00:13:58.403 READ: bw=54.3MiB/s (56.9MB/s), 9.91MiB/s-20.0MiB/s (10.4MB/s-20.9MB/s), io=56.8MiB (59.6MB), run=1002-1047msec 00:13:58.403 WRITE: bw=56.5MiB/s (59.2MB/s), 9.94MiB/s-21.1MiB/s (10.4MB/s-22.1MB/s), io=59.2MiB (62.0MB), run=1002-1047msec 00:13:58.403 00:13:58.403 Disk stats (read/write): 00:13:58.403 nvme0n1: ios=2610/2908, merge=0/0, ticks=14707/18764, in_queue=33471, util=86.27% 00:13:58.403 nvme0n2: ios=4133/4167, merge=0/0, ticks=49442/41306, in_queue=90748, util=98.07% 00:13:58.403 nvme0n3: ios=2070/2407, merge=0/0, ticks=25043/30673, in_queue=55716, util=100.00% 00:13:58.403 nvme0n4: ios=3353/3584, merge=0/0, ticks=30740/22235, in_queue=52975, util=90.18% 00:13:58.403 01:07:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:13:58.403 [global] 00:13:58.403 thread=1 00:13:58.403 invalidate=1 00:13:58.403 rw=randwrite 00:13:58.403 time_based=1 00:13:58.403 runtime=1 00:13:58.403 ioengine=libaio 00:13:58.403 direct=1 00:13:58.403 bs=4096 00:13:58.403 iodepth=128 00:13:58.403 norandommap=0 00:13:58.403 numjobs=1 00:13:58.403 00:13:58.403 verify_dump=1 00:13:58.403 verify_backlog=512 00:13:58.403 verify_state_save=0 00:13:58.403 do_verify=1 00:13:58.403 verify=crc32c-intel 00:13:58.403 [job0] 00:13:58.403 filename=/dev/nvme0n1 00:13:58.403 [job1] 00:13:58.403 filename=/dev/nvme0n2 00:13:58.403 [job2] 00:13:58.403 filename=/dev/nvme0n3 00:13:58.403 [job3] 00:13:58.403 filename=/dev/nvme0n4 00:13:58.403 Could not set queue depth (nvme0n1) 00:13:58.403 Could not set queue depth (nvme0n2) 00:13:58.403 Could not set queue depth (nvme0n3) 00:13:58.403 Could not set queue depth (nvme0n4) 00:13:58.403 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=128 00:13:58.403 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:58.403 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:58.403 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:58.403 fio-3.35 00:13:58.403 Starting 4 threads 00:13:59.778 00:13:59.778 job0: (groupid=0, jobs=1): err= 0: pid=4136151: Tue Jul 16 01:07:15 2024 00:13:59.778 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 00:13:59.778 slat (usec): min=2, max=14027, avg=111.17, stdev=629.65 00:13:59.778 clat (usec): min=3639, max=30407, avg=14398.85, stdev=4760.90 00:13:59.778 lat (usec): min=3672, max=30449, avg=14510.02, stdev=4779.62 00:13:59.778 clat percentiles (usec): 00:13:59.778 | 1.00th=[ 7504], 5.00th=[ 9634], 10.00th=[10421], 20.00th=[11207], 00:13:59.778 | 30.00th=[11600], 40.00th=[11994], 50.00th=[12911], 60.00th=[13960], 00:13:59.778 | 70.00th=[14877], 80.00th=[16188], 90.00th=[22414], 95.00th=[25297], 00:13:59.778 | 99.00th=[30016], 99.50th=[30278], 99.90th=[30278], 99.95th=[30278], 00:13:59.778 | 99.99th=[30278] 00:13:59.778 write: IOPS=4887, BW=19.1MiB/s (20.0MB/s)(19.1MiB/1001msec); 0 zone resets 00:13:59.778 slat (usec): min=3, max=12240, avg=84.37, stdev=504.95 00:13:59.778 clat (usec): min=310, max=66576, avg=12319.16, stdev=6692.79 00:13:59.778 lat (usec): min=347, max=66581, avg=12403.53, stdev=6705.03 00:13:59.778 clat percentiles (usec): 00:13:59.778 | 1.00th=[ 1713], 5.00th=[ 3654], 10.00th=[ 6063], 20.00th=[ 9241], 00:13:59.778 | 30.00th=[ 9765], 40.00th=[10814], 50.00th=[11731], 60.00th=[12387], 00:13:59.778 | 70.00th=[13829], 80.00th=[14615], 90.00th=[17171], 95.00th=[21890], 00:13:59.778 | 99.00th=[43254], 99.50th=[56361], 99.90th=[66323], 99.95th=[66323], 00:13:59.778 | 99.99th=[66323] 00:13:59.778 bw ( KiB/s): min=19720, max=19720, per=29.85%, avg=19720.00, stdev= 0.00, samples=1 00:13:59.778 iops : min= 4930, max= 4930, avg=4930.00, stdev= 0.00, samples=1 00:13:59.778 lat (usec) : 500=0.03%, 750=0.03%, 1000=0.16% 00:13:59.778 lat (msec) : 2=0.37%, 4=3.61%, 10=17.20%, 20=68.36%, 50=9.83% 00:13:59.778 lat (msec) : 100=0.41% 00:13:59.778 cpu : usr=6.00%, sys=9.30%, ctx=435, majf=0, minf=11 00:13:59.778 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:13:59.778 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:59.778 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:59.778 issued rwts: total=4608,4892,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:59.778 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:59.778 job1: (groupid=0, jobs=1): err= 0: pid=4136152: Tue Jul 16 01:07:15 2024 00:13:59.778 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:13:59.778 slat (usec): min=2, max=45165, avg=146.80, stdev=1149.32 00:13:59.778 clat (usec): min=7453, max=69161, avg=19356.02, stdev=14117.87 00:13:59.778 lat (usec): min=8126, max=69177, avg=19502.82, stdev=14175.56 00:13:59.778 clat percentiles (usec): 00:13:59.778 | 1.00th=[ 8586], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[10290], 00:13:59.778 | 30.00th=[11076], 40.00th=[12125], 50.00th=[12518], 60.00th=[15401], 00:13:59.778 | 70.00th=[20841], 80.00th=[22414], 90.00th=[42206], 95.00th=[55313], 00:13:59.778 | 99.00th=[67634], 99.50th=[67634], 99.90th=[68682], 99.95th=[68682], 00:13:59.778 | 99.99th=[68682] 
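(Aside: the [global] sections fio-wrapper prints above all use fio's write-then-verify machinery: each block is written with an embedded crc32c checksum (verify=crc32c-intel), read back and checked (do_verify=1), with verification interleaved after every 512 writes (verify_backlog=512) rather than deferred to one final pass. A minimal standalone sketch of the same pattern, assuming the /dev/nvme0n1 namespace from this test; /tmp/verify.fio is a hypothetical path, not one the harness uses:

cat > /tmp/verify.fio <<'EOF'
[global]
ioengine=libaio
direct=1
bs=4096
iodepth=128
rw=randwrite
; write each block with an embedded crc32c, then read it back and check it
do_verify=1
verify=crc32c-intel
; interleave verification after every 512 writes instead of one final pass
verify_backlog=512

[job0]
filename=/dev/nvme0n1
EOF
fio /tmp/verify.fio

End aside.)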
00:13:59.778 write: IOPS=4000, BW=15.6MiB/s (16.4MB/s)(15.6MiB/1001msec); 0 zone resets 00:13:59.778 slat (usec): min=3, max=9549, avg=106.09, stdev=476.00 00:13:59.778 clat (usec): min=332, max=62574, avg=14391.45, stdev=9474.45 00:13:59.778 lat (usec): min=1328, max=62600, avg=14497.54, stdev=9525.98 00:13:59.778 clat percentiles (usec): 00:13:59.778 | 1.00th=[ 3523], 5.00th=[ 7242], 10.00th=[ 8586], 20.00th=[ 9372], 00:13:59.778 | 30.00th=[ 9765], 40.00th=[10683], 50.00th=[11469], 60.00th=[11994], 00:13:59.778 | 70.00th=[13566], 80.00th=[19268], 90.00th=[22414], 95.00th=[25297], 00:13:59.778 | 99.00th=[62129], 99.50th=[62129], 99.90th=[62653], 99.95th=[62653], 00:13:59.778 | 99.99th=[62653] 00:13:59.778 bw ( KiB/s): min=16384, max=16384, per=24.80%, avg=16384.00, stdev= 0.00, samples=1 00:13:59.778 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:13:59.778 lat (usec) : 500=0.01% 00:13:59.778 lat (msec) : 2=0.03%, 4=1.00%, 10=24.18%, 20=49.05%, 50=20.32% 00:13:59.778 lat (msec) : 100=5.40% 00:13:59.778 cpu : usr=4.30%, sys=7.10%, ctx=485, majf=0, minf=13 00:13:59.778 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:13:59.778 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:59.778 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:59.778 issued rwts: total=3584,4004,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:59.778 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:59.778 job2: (groupid=0, jobs=1): err= 0: pid=4136153: Tue Jul 16 01:07:15 2024 00:13:59.778 read: IOPS=3168, BW=12.4MiB/s (13.0MB/s)(12.4MiB/1002msec) 00:13:59.778 slat (usec): min=2, max=10231, avg=139.74, stdev=719.50 00:13:59.778 clat (usec): min=1020, max=53889, avg=17088.91, stdev=6783.65 00:13:59.778 lat (usec): min=1412, max=53895, avg=17228.65, stdev=6814.18 00:13:59.778 clat percentiles (usec): 00:13:59.778 | 1.00th=[ 5014], 5.00th=[11600], 10.00th=[12125], 20.00th=[13042], 00:13:59.778 | 30.00th=[13566], 40.00th=[14091], 50.00th=[14615], 60.00th=[15139], 00:13:59.778 | 70.00th=[17433], 80.00th=[22414], 90.00th=[24773], 95.00th=[27919], 00:13:59.778 | 99.00th=[46924], 99.50th=[52167], 99.90th=[53740], 99.95th=[53740], 00:13:59.778 | 99.99th=[53740] 00:13:59.778 write: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec); 0 zone resets 00:13:59.778 slat (usec): min=3, max=23772, avg=143.85, stdev=929.55 00:13:59.778 clat (usec): min=9978, max=64518, avg=19891.70, stdev=11555.29 00:13:59.778 lat (usec): min=9999, max=64524, avg=20035.55, stdev=11607.27 00:13:59.778 clat percentiles (usec): 00:13:59.778 | 1.00th=[11076], 5.00th=[11600], 10.00th=[11863], 20.00th=[13304], 00:13:59.778 | 30.00th=[14091], 40.00th=[14353], 50.00th=[14615], 60.00th=[15008], 00:13:59.778 | 70.00th=[17171], 80.00th=[28967], 90.00th=[36439], 95.00th=[49021], 00:13:59.778 | 99.00th=[57410], 99.50th=[64750], 99.90th=[64750], 99.95th=[64750], 00:13:59.778 | 99.99th=[64750] 00:13:59.778 bw ( KiB/s): min=12288, max=12288, per=18.60%, avg=12288.00, stdev= 0.00, samples=1 00:13:59.778 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:13:59.778 lat (msec) : 2=0.12%, 10=0.98%, 20=73.99%, 50=22.52%, 100=2.40% 00:13:59.778 cpu : usr=5.29%, sys=8.29%, ctx=387, majf=0, minf=13 00:13:59.778 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:13:59.778 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:59.778 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.1% 00:13:59.778 issued rwts: total=3175,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:59.778 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:59.778 job3: (groupid=0, jobs=1): err= 0: pid=4136154: Tue Jul 16 01:07:15 2024 00:13:59.778 read: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:13:59.778 slat (usec): min=2, max=12116, avg=112.54, stdev=658.92 00:13:59.778 clat (usec): min=5784, max=38970, avg=13876.30, stdev=4538.61 00:13:59.778 lat (usec): min=5795, max=38976, avg=13988.84, stdev=4579.40 00:13:59.778 clat percentiles (usec): 00:13:59.778 | 1.00th=[ 7111], 5.00th=[ 9896], 10.00th=[10683], 20.00th=[11207], 00:13:59.778 | 30.00th=[11469], 40.00th=[12518], 50.00th=[12911], 60.00th=[13566], 00:13:59.778 | 70.00th=[13829], 80.00th=[15139], 90.00th=[17695], 95.00th=[22938], 00:13:59.778 | 99.00th=[34866], 99.50th=[36439], 99.90th=[38011], 99.95th=[39060], 00:13:59.778 | 99.99th=[39060] 00:13:59.778 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:13:59.778 slat (usec): min=3, max=24233, avg=135.05, stdev=829.26 00:13:59.778 clat (usec): min=344, max=73617, avg=18515.80, stdev=11252.05 00:13:59.778 lat (usec): min=2976, max=73626, avg=18650.85, stdev=11315.51 00:13:59.778 clat percentiles (usec): 00:13:59.778 | 1.00th=[ 3523], 5.00th=[ 7373], 10.00th=[ 9241], 20.00th=[11994], 00:13:59.778 | 30.00th=[12780], 40.00th=[13304], 50.00th=[14222], 60.00th=[19530], 00:13:59.778 | 70.00th=[21627], 80.00th=[22414], 90.00th=[28705], 95.00th=[37487], 00:13:59.778 | 99.00th=[65799], 99.50th=[73925], 99.90th=[73925], 99.95th=[73925], 00:13:59.779 | 99.99th=[73925] 00:13:59.779 bw ( KiB/s): min=13872, max=17808, per=23.97%, avg=15840.00, stdev=2783.17, samples=2 00:13:59.779 iops : min= 3468, max= 4452, avg=3960.00, stdev=695.79, samples=2 00:13:59.779 lat (usec) : 500=0.01% 00:13:59.779 lat (msec) : 4=0.70%, 10=8.79%, 20=66.94%, 50=21.51%, 100=2.05% 00:13:59.779 cpu : usr=3.39%, sys=6.19%, ctx=450, majf=0, minf=13 00:13:59.779 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:13:59.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:59.779 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:59.779 issued rwts: total=3584,4088,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:59.779 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:59.779 00:13:59.779 Run status group 0 (all jobs): 00:13:59.779 READ: bw=58.2MiB/s (61.1MB/s), 12.4MiB/s-18.0MiB/s (13.0MB/s-18.9MB/s), io=58.4MiB (61.2MB), run=1001-1003msec 00:13:59.779 WRITE: bw=64.5MiB/s (67.7MB/s), 14.0MiB/s-19.1MiB/s (14.7MB/s-20.0MB/s), io=64.7MiB (67.9MB), run=1001-1003msec 00:13:59.779 00:13:59.779 Disk stats (read/write): 00:13:59.779 nvme0n1: ios=4132/4096, merge=0/0, ticks=21703/22365, in_queue=44068, util=87.17% 00:13:59.779 nvme0n2: ios=2971/3072, merge=0/0, ticks=17272/14956, in_queue=32228, util=98.68% 00:13:59.779 nvme0n3: ios=2617/2895, merge=0/0, ticks=12487/14092, in_queue=26579, util=99.79% 00:13:59.779 nvme0n4: ios=3090/3072, merge=0/0, ticks=29956/40857, in_queue=70813, util=100.00% 00:13:59.779 01:07:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:13:59.779 01:07:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=4136286 00:13:59.779 01:07:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:13:59.779 01:07:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- 
# sleep 3 00:13:59.779 [global] 00:13:59.779 thread=1 00:13:59.779 invalidate=1 00:13:59.779 rw=read 00:13:59.779 time_based=1 00:13:59.779 runtime=10 00:13:59.779 ioengine=libaio 00:13:59.779 direct=1 00:13:59.779 bs=4096 00:13:59.779 iodepth=1 00:13:59.779 norandommap=1 00:13:59.779 numjobs=1 00:13:59.779 00:13:59.779 [job0] 00:13:59.779 filename=/dev/nvme0n1 00:13:59.779 [job1] 00:13:59.779 filename=/dev/nvme0n2 00:13:59.779 [job2] 00:13:59.779 filename=/dev/nvme0n3 00:13:59.779 [job3] 00:13:59.779 filename=/dev/nvme0n4 00:13:59.779 Could not set queue depth (nvme0n1) 00:13:59.779 Could not set queue depth (nvme0n2) 00:13:59.779 Could not set queue depth (nvme0n3) 00:13:59.779 Could not set queue depth (nvme0n4) 00:13:59.779 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:59.779 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:59.779 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:59.779 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:59.779 fio-3.35 00:13:59.779 Starting 4 threads 00:14:03.057 01:07:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:14:03.057 01:07:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:14:03.057 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=15372288, buflen=4096 00:14:03.057 fio: pid=4136387, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:03.057 01:07:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:03.057 01:07:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:14:03.057 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=806912, buflen=4096 00:14:03.057 fio: pid=4136386, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:03.315 01:07:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:03.315 01:07:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:14:03.315 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=397312, buflen=4096 00:14:03.315 fio: pid=4136384, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:03.573 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=42622976, buflen=4096 00:14:03.573 fio: pid=4136385, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:03.573 01:07:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:03.573 01:07:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:14:03.573 00:14:03.573 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=4136384: Tue Jul 16 01:07:19 2024 00:14:03.573 read: IOPS=28, BW=114KiB/s (117kB/s)(388KiB/3397msec) 00:14:03.573 slat (usec): min=9, 
max=5923, avg=103.52, stdev=622.99 00:14:03.573 clat (usec): min=339, max=41126, avg=34667.68, stdev=14740.24 00:14:03.573 lat (usec): min=363, max=46975, avg=34771.90, stdev=14785.50 00:14:03.573 clat percentiles (usec): 00:14:03.573 | 1.00th=[ 338], 5.00th=[ 363], 10.00th=[ 392], 20.00th=[40633], 00:14:03.573 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:14:03.573 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:14:03.573 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:14:03.573 | 99.99th=[41157] 00:14:03.573 bw ( KiB/s): min= 96, max= 168, per=0.74%, avg=117.33, stdev=27.09, samples=6 00:14:03.573 iops : min= 24, max= 42, avg=29.33, stdev= 6.77, samples=6 00:14:03.573 lat (usec) : 500=14.29%, 750=1.02% 00:14:03.573 lat (msec) : 50=83.67% 00:14:03.573 cpu : usr=0.15%, sys=0.00%, ctx=100, majf=0, minf=1 00:14:03.573 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:03.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:03.573 complete : 0=1.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:03.573 issued rwts: total=98,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:03.573 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:03.573 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=4136385: Tue Jul 16 01:07:19 2024 00:14:03.573 read: IOPS=2857, BW=11.2MiB/s (11.7MB/s)(40.6MiB/3642msec) 00:14:03.573 slat (usec): min=4, max=8940, avg=15.02, stdev=130.21 00:14:03.573 clat (usec): min=198, max=42076, avg=329.73, stdev=1626.26 00:14:03.573 lat (usec): min=211, max=51017, avg=343.99, stdev=1665.86 00:14:03.573 clat percentiles (usec): 00:14:03.573 | 1.00th=[ 215], 5.00th=[ 223], 10.00th=[ 227], 20.00th=[ 233], 00:14:03.573 | 30.00th=[ 239], 40.00th=[ 243], 50.00th=[ 249], 60.00th=[ 260], 00:14:03.573 | 70.00th=[ 273], 80.00th=[ 289], 90.00th=[ 326], 95.00th=[ 375], 00:14:03.573 | 99.00th=[ 461], 99.50th=[ 506], 99.90th=[42206], 99.95th=[42206], 00:14:03.573 | 99.99th=[42206] 00:14:03.573 bw ( KiB/s): min= 4428, max=15136, per=73.90%, avg=11732.00, stdev=4246.32, samples=7 00:14:03.573 iops : min= 1107, max= 3784, avg=2933.00, stdev=1061.58, samples=7 00:14:03.573 lat (usec) : 250=51.58%, 500=47.88%, 750=0.36%, 1000=0.02% 00:14:03.573 lat (msec) : 50=0.15% 00:14:03.573 cpu : usr=1.87%, sys=3.98%, ctx=10411, majf=0, minf=1 00:14:03.573 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:03.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:03.573 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:03.573 issued rwts: total=10407,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:03.573 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:03.573 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=4136386: Tue Jul 16 01:07:19 2024 00:14:03.573 read: IOPS=62, BW=250KiB/s (256kB/s)(788KiB/3146msec) 00:14:03.573 slat (nsec): min=4784, max=35557, avg=15174.38, stdev=9427.96 00:14:03.573 clat (usec): min=243, max=41483, avg=15835.91, stdev=19784.66 00:14:03.573 lat (usec): min=248, max=41502, avg=15851.10, stdev=19789.70 00:14:03.573 clat percentiles (usec): 00:14:03.573 | 1.00th=[ 245], 5.00th=[ 253], 10.00th=[ 265], 20.00th=[ 322], 00:14:03.573 | 30.00th=[ 367], 40.00th=[ 375], 50.00th=[ 424], 60.00th=[ 553], 00:14:03.573 | 70.00th=[41157], 80.00th=[41157], 
90.00th=[41157], 95.00th=[41157], 00:14:03.573 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:14:03.573 | 99.99th=[41681] 00:14:03.573 bw ( KiB/s): min= 96, max= 432, per=1.62%, avg=257.33, stdev=107.26, samples=6 00:14:03.573 iops : min= 24, max= 108, avg=64.33, stdev=26.82, samples=6 00:14:03.573 lat (usec) : 250=3.54%, 500=51.01%, 750=7.07% 00:14:03.573 lat (msec) : 50=37.88% 00:14:03.573 cpu : usr=0.00%, sys=0.19%, ctx=198, majf=0, minf=1 00:14:03.574 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:03.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:03.574 complete : 0=0.5%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:03.574 issued rwts: total=198,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:03.574 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:03.574 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=4136387: Tue Jul 16 01:07:19 2024 00:14:03.574 read: IOPS=1289, BW=5155KiB/s (5279kB/s)(14.7MiB/2912msec) 00:14:03.574 slat (nsec): min=4502, max=56302, avg=14322.07, stdev=6524.58 00:14:03.574 clat (usec): min=217, max=41063, avg=752.00, stdev=4226.91 00:14:03.574 lat (usec): min=223, max=41099, avg=766.32, stdev=4228.11 00:14:03.574 clat percentiles (usec): 00:14:03.574 | 1.00th=[ 237], 5.00th=[ 258], 10.00th=[ 265], 20.00th=[ 273], 00:14:03.574 | 30.00th=[ 281], 40.00th=[ 285], 50.00th=[ 293], 60.00th=[ 302], 00:14:03.574 | 70.00th=[ 310], 80.00th=[ 330], 90.00th=[ 383], 95.00th=[ 457], 00:14:03.574 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:14:03.574 | 99.99th=[41157] 00:14:03.574 bw ( KiB/s): min= 96, max=12336, per=37.72%, avg=5988.80, stdev=6063.22, samples=5 00:14:03.574 iops : min= 24, max= 3084, avg=1497.20, stdev=1515.81, samples=5 00:14:03.574 lat (usec) : 250=3.20%, 500=93.66%, 750=2.02% 00:14:03.574 lat (msec) : 50=1.09% 00:14:03.574 cpu : usr=0.65%, sys=3.40%, ctx=3756, majf=0, minf=1 00:14:03.574 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:03.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:03.574 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:03.574 issued rwts: total=3754,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:03.574 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:03.574 00:14:03.574 Run status group 0 (all jobs): 00:14:03.574 READ: bw=15.5MiB/s (16.3MB/s), 114KiB/s-11.2MiB/s (117kB/s-11.7MB/s), io=56.5MiB (59.2MB), run=2912-3642msec 00:14:03.574 00:14:03.574 Disk stats (read/write): 00:14:03.574 nvme0n1: ios=96/0, merge=0/0, ticks=3325/0, in_queue=3325, util=95.77% 00:14:03.574 nvme0n2: ios=10421/0, merge=0/0, ticks=3484/0, in_queue=3484, util=99.33% 00:14:03.574 nvme0n3: ios=196/0, merge=0/0, ticks=3081/0, in_queue=3081, util=96.79% 00:14:03.574 nvme0n4: ios=3802/0, merge=0/0, ticks=3815/0, in_queue=3815, util=99.69% 00:14:03.832 01:07:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:03.832 01:07:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:14:04.089 01:07:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:04.089 01:07:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:14:04.347 01:07:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:04.347 01:07:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:14:04.605 01:07:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:04.605 01:07:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:14:04.863 01:07:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:14:04.863 01:07:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 4136286 00:14:04.863 01:07:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:14:04.863 01:07:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:05.120 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:05.120 01:07:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:05.121 01:07:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:14:05.121 01:07:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:05.121 01:07:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:05.121 01:07:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:05.121 01:07:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:05.121 01:07:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:14:05.121 01:07:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:14:05.121 01:07:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:14:05.121 nvmf hotplug test: fio failed as expected 00:14:05.121 01:07:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:05.378 01:07:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:14:05.378 01:07:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:14:05.378 01:07:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:14:05.378 01:07:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:14:05.378 01:07:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:14:05.378 01:07:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:05.378 01:07:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:14:05.378 01:07:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:05.378 01:07:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:14:05.378 01:07:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:05.378 01:07:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:05.378 rmmod nvme_tcp 00:14:05.378 rmmod nvme_fabrics 00:14:05.378 rmmod nvme_keyring 00:14:05.378 01:07:21 nvmf_tcp.nvmf_fio_target -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:05.378 01:07:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:14:05.378 01:07:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:14:05.378 01:07:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 4134266 ']' 00:14:05.378 01:07:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 4134266 00:14:05.378 01:07:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 4134266 ']' 00:14:05.378 01:07:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 4134266 00:14:05.378 01:07:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:14:05.378 01:07:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:05.378 01:07:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4134266 00:14:05.378 01:07:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:05.378 01:07:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:05.378 01:07:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4134266' 00:14:05.378 killing process with pid 4134266 00:14:05.378 01:07:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 4134266 00:14:05.378 01:07:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 4134266 00:14:05.636 01:07:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:05.636 01:07:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:05.636 01:07:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:05.636 01:07:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:05.636 01:07:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:05.636 01:07:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:05.636 01:07:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:05.636 01:07:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:08.163 01:07:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:08.163 00:14:08.163 real 0m23.489s 00:14:08.163 user 1m20.823s 00:14:08.163 sys 0m6.843s 00:14:08.163 01:07:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:08.163 01:07:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.163 ************************************ 00:14:08.163 END TEST nvmf_fio_target 00:14:08.163 ************************************ 00:14:08.163 01:07:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:08.163 01:07:23 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:14:08.163 01:07:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:08.163 01:07:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:08.163 01:07:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:08.163 ************************************ 00:14:08.163 START TEST nvmf_bdevio 00:14:08.163 ************************************ 00:14:08.163 01:07:23 nvmf_tcp.nvmf_bdevio -- 
common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:14:08.163 * Looking for test storage... 00:14:08.163 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:08.163 01:07:23 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:08.163 01:07:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:14:08.163 01:07:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:08.163 01:07:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:08.163 01:07:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:08.163 01:07:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:08.163 01:07:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:08.163 01:07:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:08.163 01:07:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:08.163 01:07:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:08.163 01:07:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:08.163 01:07:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:08.163 01:07:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:08.163 01:07:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:08.163 01:07:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:08.163 01:07:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:08.163 01:07:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:08.163 01:07:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:08.163 01:07:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:08.163 01:07:23 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:08.163 01:07:23 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:08.163 01:07:23 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:08.163 01:07:23 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.163 01:07:23 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.163 01:07:23 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.163 01:07:23 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:14:08.163 01:07:23 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.163 01:07:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:14:08.163 01:07:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:08.163 01:07:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:08.163 01:07:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:08.163 01:07:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:08.163 01:07:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:08.163 01:07:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:08.163 01:07:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:08.163 01:07:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:08.163 01:07:23 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:08.163 01:07:23 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:08.163 01:07:23 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:14:08.163 01:07:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:08.163 01:07:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:08.163 01:07:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:08.163 01:07:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:08.163 01:07:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:08.163 01:07:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:08.163 01:07:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:14:08.163 01:07:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:08.163 01:07:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:08.163 01:07:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:08.163 01:07:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:14:08.163 01:07:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:10.058 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:10.058 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:14:10.058 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:10.058 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:10.058 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:10.058 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:10.058 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:10.058 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:14:10.058 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:10.058 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:14:10.058 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:14:10.058 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:14:10.058 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:14:10.058 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:14:10.058 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:14:10.058 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:10.058 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:10.058 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:10.058 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:10.058 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:10.058 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:10.058 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:10.058 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:10.058 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:10.058 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:10.058 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:10.058 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:10.058 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:10.058 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:10.058 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:14:10.059 Found 0000:09:00.0 (0x8086 - 0x159b) 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:14:10.059 Found 0000:09:00.1 (0x8086 - 0x159b) 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:14:10.059 Found net devices under 0000:09:00.0: cvl_0_0 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:14:10.059 
Found net devices under 0000:09:00.1: cvl_0_1 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:10.059 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:10.059 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:14:10.059 00:14:10.059 --- 10.0.0.2 ping statistics --- 00:14:10.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.059 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:10.059 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:10.059 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:14:10.059 00:14:10.059 --- 10.0.0.1 ping statistics --- 00:14:10.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.059 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:10.059 01:07:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:10.059 01:07:26 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:10.059 01:07:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:10.059 01:07:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:10.059 01:07:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:10.059 01:07:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=4139008 00:14:10.059 01:07:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:14:10.059 01:07:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 4139008 00:14:10.059 01:07:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 4139008 ']' 00:14:10.059 01:07:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.059 01:07:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:10.059 01:07:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:10.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:10.059 01:07:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:10.059 01:07:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:10.316 [2024-07-16 01:07:26.058028] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:14:10.316 [2024-07-16 01:07:26.058109] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:10.316 EAL: No free 2048 kB hugepages reported on node 1 00:14:10.316 [2024-07-16 01:07:26.122423] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:10.316 [2024-07-16 01:07:26.228993] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:10.316 [2024-07-16 01:07:26.229060] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:10.317 [2024-07-16 01:07:26.229073] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:10.317 [2024-07-16 01:07:26.229084] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:10.317 [2024-07-16 01:07:26.229093] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:10.317 [2024-07-16 01:07:26.229189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:14:10.317 [2024-07-16 01:07:26.229254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:14:10.317 [2024-07-16 01:07:26.229318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:14:10.317 [2024-07-16 01:07:26.229321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:10.574 01:07:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:10.574 01:07:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:14:10.574 01:07:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:10.574 01:07:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:10.574 01:07:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:10.574 01:07:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:10.574 01:07:26 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:10.574 01:07:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.574 01:07:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:10.574 [2024-07-16 01:07:26.376550] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:10.574 01:07:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.574 01:07:26 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:10.574 01:07:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.574 01:07:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:10.574 Malloc0 00:14:10.574 01:07:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.574 01:07:26 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:10.574 01:07:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.574 01:07:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:10.574 01:07:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.574 01:07:26 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:10.574 01:07:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.574 01:07:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:10.574 01:07:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.574 01:07:26 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:10.574 01:07:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.574 01:07:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
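The rpc_cmd calls traced above are thin wrappers around scripts/rpc.py talking to the target's default RPC socket (/var/tmp/spdk.sock). Condensed into plain invocations, with every flag copied from the trace, the bring-up sequence the bdevio test drives is:

RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"

# TCP transport with the options the test passed (-o, and -u 8192 for the IO unit size)
$RPC nvmf_create_transport -t tcp -o -u 8192

# 64 MiB malloc bdev with 512-byte blocks to back the namespace
$RPC bdev_malloc_create 64 512 -b Malloc0

# Subsystem allowing any host (-a), with the bdev attached as namespace 1
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0

# Listener on the target-side address configured during nvmf_tcp_init
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420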
00:14:10.574 [2024-07-16 01:07:26.429327] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:10.574 01:07:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.574 01:07:26 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:14:10.574 01:07:26 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:10.574 01:07:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:14:10.574 01:07:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:14:10.574 01:07:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:10.574 01:07:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:10.574 { 00:14:10.574 "params": { 00:14:10.574 "name": "Nvme$subsystem", 00:14:10.574 "trtype": "$TEST_TRANSPORT", 00:14:10.574 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:10.574 "adrfam": "ipv4", 00:14:10.574 "trsvcid": "$NVMF_PORT", 00:14:10.574 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:10.574 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:10.574 "hdgst": ${hdgst:-false}, 00:14:10.574 "ddgst": ${ddgst:-false} 00:14:10.574 }, 00:14:10.574 "method": "bdev_nvme_attach_controller" 00:14:10.574 } 00:14:10.574 EOF 00:14:10.574 )") 00:14:10.574 01:07:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:14:10.574 01:07:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:14:10.574 01:07:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:14:10.575 01:07:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:10.575 "params": { 00:14:10.575 "name": "Nvme1", 00:14:10.575 "trtype": "tcp", 00:14:10.575 "traddr": "10.0.0.2", 00:14:10.575 "adrfam": "ipv4", 00:14:10.575 "trsvcid": "4420", 00:14:10.575 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:10.575 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:10.575 "hdgst": false, 00:14:10.575 "ddgst": false 00:14:10.575 }, 00:14:10.575 "method": "bdev_nvme_attach_controller" 00:14:10.575 }' 00:14:10.575 [2024-07-16 01:07:26.475143] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 
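bdevio receives its controller description on --json /dev/fd/62, fed by gen_nvmf_target_json. The entry below reproduces exactly what the trace printed; written to a file it can be handed to bdevio the same way. The file path is illustrative, and any outer wrapper gen_nvmf_target_json may add around this entry is not visible in the trace, so treat this as the controller description rather than a guaranteed-complete config file:

cat > /tmp/bdevio_nvme.json <<'EOF'
{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /tmp/bdevio_nvme.json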
00:14:10.575 [2024-07-16 01:07:26.475222] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4139045 ] 00:14:10.575 EAL: No free 2048 kB hugepages reported on node 1 00:14:10.575 [2024-07-16 01:07:26.539908] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:10.832 [2024-07-16 01:07:26.656164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:10.832 [2024-07-16 01:07:26.656214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:10.832 [2024-07-16 01:07:26.656218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.090 I/O targets: 00:14:11.090 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:11.090 00:14:11.090 00:14:11.090 CUnit - A unit testing framework for C - Version 2.1-3 00:14:11.090 http://cunit.sourceforge.net/ 00:14:11.090 00:14:11.090 00:14:11.090 Suite: bdevio tests on: Nvme1n1 00:14:11.090 Test: blockdev write read block ...passed 00:14:11.090 Test: blockdev write zeroes read block ...passed 00:14:11.090 Test: blockdev write zeroes read no split ...passed 00:14:11.090 Test: blockdev write zeroes read split ...passed 00:14:11.090 Test: blockdev write zeroes read split partial ...passed 00:14:11.090 Test: blockdev reset ...[2024-07-16 01:07:26.958511] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:11.090 [2024-07-16 01:07:26.958637] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1acd6d0 (9): Bad file descriptor 00:14:11.090 [2024-07-16 01:07:27.021100] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:14:11.090 passed 00:14:11.090 Test: blockdev write read 8 blocks ...passed 00:14:11.090 Test: blockdev write read size > 128k ...passed 00:14:11.090 Test: blockdev write read invalid size ...passed 00:14:11.347 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:11.347 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:11.347 Test: blockdev write read max offset ...passed 00:14:11.347 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:11.347 Test: blockdev writev readv 8 blocks ...passed 00:14:11.347 Test: blockdev writev readv 30 x 1block ...passed 00:14:11.347 Test: blockdev writev readv block ...passed 00:14:11.347 Test: blockdev writev readv size > 128k ...passed 00:14:11.347 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:11.347 Test: blockdev comparev and writev ...[2024-07-16 01:07:27.320135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:11.347 [2024-07-16 01:07:27.320170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:11.347 [2024-07-16 01:07:27.320194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:11.347 [2024-07-16 01:07:27.320212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:11.347 [2024-07-16 01:07:27.320592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:11.347 [2024-07-16 01:07:27.320617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:11.347 [2024-07-16 01:07:27.320639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:11.347 [2024-07-16 01:07:27.320654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:11.347 [2024-07-16 01:07:27.321012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:11.347 [2024-07-16 01:07:27.321037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:11.347 [2024-07-16 01:07:27.321059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:11.347 [2024-07-16 01:07:27.321074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:11.347 [2024-07-16 01:07:27.321439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:11.347 [2024-07-16 01:07:27.321462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:11.347 [2024-07-16 01:07:27.321484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:11.347 [2024-07-16 01:07:27.321499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:11.605 passed 00:14:11.605 Test: blockdev nvme passthru rw ...passed 00:14:11.605 Test: blockdev nvme passthru vendor specific ...[2024-07-16 01:07:27.405228] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:11.605 [2024-07-16 01:07:27.405255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:11.605 [2024-07-16 01:07:27.405411] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:11.605 [2024-07-16 01:07:27.405435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:11.605 [2024-07-16 01:07:27.405588] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:11.605 [2024-07-16 01:07:27.405611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:11.605 [2024-07-16 01:07:27.405763] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:11.605 [2024-07-16 01:07:27.405786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:11.605 passed 00:14:11.605 Test: blockdev nvme admin passthru ...passed 00:14:11.605 Test: blockdev copy ...passed 00:14:11.605 00:14:11.605 Run Summary: Type Total Ran Passed Failed Inactive 00:14:11.605 suites 1 1 n/a 0 0 00:14:11.605 tests 23 23 23 0 0 00:14:11.605 asserts 152 152 152 0 n/a 00:14:11.605 00:14:11.605 Elapsed time = 1.238 seconds 00:14:11.862 01:07:27 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:11.862 01:07:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.862 01:07:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:11.862 01:07:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.862 01:07:27 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:11.862 01:07:27 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:14:11.862 01:07:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:11.862 01:07:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:14:11.862 01:07:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:11.862 01:07:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:14:11.862 01:07:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:11.862 01:07:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:11.862 rmmod nvme_tcp 00:14:11.862 rmmod nvme_fabrics 00:14:11.862 rmmod nvme_keyring 00:14:11.862 01:07:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:11.862 01:07:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:14:11.862 01:07:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:14:11.862 01:07:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 4139008 ']' 00:14:11.862 01:07:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 4139008 00:14:11.862 01:07:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
4139008 ']' 00:14:11.862 01:07:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 4139008 00:14:11.862 01:07:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:14:11.862 01:07:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:11.862 01:07:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4139008 00:14:11.862 01:07:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:14:11.862 01:07:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:14:11.862 01:07:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4139008' 00:14:11.862 killing process with pid 4139008 00:14:11.862 01:07:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 4139008 00:14:11.862 01:07:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 4139008 00:14:12.122 01:07:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:12.122 01:07:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:12.122 01:07:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:12.122 01:07:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:12.122 01:07:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:12.122 01:07:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:12.122 01:07:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:12.122 01:07:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:14.676 01:07:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:14.676 00:14:14.676 real 0m6.453s 00:14:14.676 user 0m10.142s 00:14:14.676 sys 0m2.093s 00:14:14.676 01:07:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:14.676 01:07:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:14.676 ************************************ 00:14:14.676 END TEST nvmf_bdevio 00:14:14.676 ************************************ 00:14:14.676 01:07:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:14.676 01:07:30 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:14.676 01:07:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:14.676 01:07:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:14.676 01:07:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:14.676 ************************************ 00:14:14.676 START TEST nvmf_auth_target 00:14:14.676 ************************************ 00:14:14.676 01:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:14.676 * Looking for test storage... 
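The teardown that just ran (nvmftestfini → nvmf_tcp_fini) mirrors the setup. Roughly, with the pid and interface names from this run, and with the namespace removal assumed to be what the xtrace-suppressed _remove_spdk_ns helper performs:

# unload the kernel initiator stack; -r also drags out nvme-fabrics / nvme-keyring,
# matching the rmmod lines in the trace
sync
modprobe -v -r nvme-tcp

# stop the target: nvmfpid was 4139008 in this run
kill 4139008
wait 4139008

# drop the target namespace (assumed _remove_spdk_ns behaviour) and flush the initiator side
ip netns delete cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1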
00:14:14.676 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:14.676 01:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:14.676 01:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:14:14.676 01:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:14.676 01:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:14.676 01:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:14.676 01:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:14.676 01:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:14.676 01:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:14.676 01:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:14.676 01:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:14.676 01:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:14.676 01:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:14.676 01:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:14.676 01:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:14.676 01:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:14.676 01:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:14.676 01:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:14.676 01:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:14.676 01:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:14.676 01:07:30 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:14.676 01:07:30 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:14.676 01:07:30 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:14.676 01:07:30 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.676 01:07:30 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.676 01:07:30 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.676 01:07:30 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:14:14.676 01:07:30 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.676 01:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:14:14.676 01:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:14.676 01:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:14.676 01:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:14.676 01:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:14.676 01:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:14.676 01:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:14.676 01:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:14.676 01:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:14.676 01:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:14:14.676 01:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:14:14.676 01:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:14:14.676 01:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:14.676 01:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:14:14.676 01:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:14:14.676 01:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:14:14.676 01:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:14:14.676 01:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:14.676 01:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:14.676 01:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:14.676 01:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:14.676 01:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:14.676 01:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:14.676 01:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:14.677 01:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:14.677 01:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:14.677 01:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:14.677 01:07:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:14:14.677 01:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.578 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:16.578 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:14:16.578 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:16.578 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:16.578 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:16.578 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:16.578 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:16.578 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:14:16.578 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:16.578 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:14:16.578 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:14:16.578 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:14:16.578 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:14:16.578 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:14:16.578 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:14:16.578 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:16.578 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:16.579 01:07:32 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:14:16.579 Found 0000:09:00.0 (0x8086 - 0x159b) 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:14:16.579 Found 0000:09:00.1 (0x8086 - 0x159b) 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: 
cvl_0_0' 00:14:16.579 Found net devices under 0000:09:00.0: cvl_0_0 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:14:16.579 Found net devices under 0000:09:00.1: cvl_0_1 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:16.579 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:16.579 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:14:16.579 00:14:16.579 --- 10.0.0.2 ping statistics --- 00:14:16.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:16.579 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:16.579 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:16.579 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:14:16.579 00:14:16.579 --- 10.0.0.1 ping statistics --- 00:14:16.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:16.579 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=4141224 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 4141224 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 4141224 ']' 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
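nvmfappstart backgrounds nvmf_tgt inside the namespace and then blocks in waitforlisten until the RPC socket answers; the loop body runs with xtrace disabled, so only its entry (max_retries=100) and the final (( i == 0 )) / return 0 appear in the trace. A sketch of what such a wait amounts to; the rpc_get_methods probe and the sleep interval are assumptions, not visible in the log:

RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 100; i > 0; i--)); do
        kill -0 "$pid" 2> /dev/null || return 1     # target died while starting
        "$RPC" -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
        sleep 0.5
    done
    return 1                                        # retries exhausted
}

waitforlisten 4141224    # pid of the nvmf_tgt started above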
00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:16.579 01:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.837 01:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:16.838 01:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:14:16.838 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:16.838 01:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:16.838 01:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.838 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:16.838 01:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=4141243 00:14:16.838 01:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:14:16.838 01:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:16.838 01:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:14:16.838 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:16.838 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:16.838 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:16.838 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:14:16.838 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:14:16.838 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:16.838 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=1d700634be3d37af3d128cc8fc3fafd22be3ee5118fe8792 00:14:16.838 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:14:16.838 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.xYV 00:14:16.838 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 1d700634be3d37af3d128cc8fc3fafd22be3ee5118fe8792 0 00:14:16.838 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 1d700634be3d37af3d128cc8fc3fafd22be3ee5118fe8792 0 00:14:16.838 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:16.838 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:16.838 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=1d700634be3d37af3d128cc8fc3fafd22be3ee5118fe8792 00:14:16.838 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:14:16.838 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:17.096 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.xYV 00:14:17.096 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.xYV 00:14:17.096 01:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.xYV 00:14:17.096 01:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:14:17.096 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:14:17.096 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:17.096 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:17.096 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:14:17.096 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:14:17.096 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:17.096 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=086190ce923a46051aaa4d3debf9888d4ffcde4cccb4ffa635bd195007497101 00:14:17.096 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:14:17.096 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.1XE 00:14:17.096 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 086190ce923a46051aaa4d3debf9888d4ffcde4cccb4ffa635bd195007497101 3 00:14:17.096 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 086190ce923a46051aaa4d3debf9888d4ffcde4cccb4ffa635bd195007497101 3 00:14:17.096 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:17.096 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:17.096 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=086190ce923a46051aaa4d3debf9888d4ffcde4cccb4ffa635bd195007497101 00:14:17.096 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:14:17.096 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:17.096 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.1XE 00:14:17.096 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.1XE 00:14:17.096 01:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.1XE 00:14:17.096 01:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:14:17.096 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:17.096 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:17.096 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:17.096 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:14:17.096 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:14:17.096 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:17.096 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d6ece02e201ca8ef5f747a5b23e63bb2 00:14:17.096 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:14:17.096 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.dJ9 00:14:17.097 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d6ece02e201ca8ef5f747a5b23e63bb2 1 00:14:17.097 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d6ece02e201ca8ef5f747a5b23e63bb2 1 00:14:17.097 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:17.097 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:17.097 01:07:32 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=d6ece02e201ca8ef5f747a5b23e63bb2 00:14:17.097 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:14:17.097 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:17.097 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.dJ9 00:14:17.097 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.dJ9 00:14:17.097 01:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.dJ9 00:14:17.097 01:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:14:17.097 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:17.097 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:17.097 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:17.097 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:14:17.097 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:14:17.097 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:17.097 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=401531a1b81f67ed8f530d51d5a51a93cb736cb4a687903c 00:14:17.097 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:14:17.097 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.tGw 00:14:17.097 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 401531a1b81f67ed8f530d51d5a51a93cb736cb4a687903c 2 00:14:17.097 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 401531a1b81f67ed8f530d51d5a51a93cb736cb4a687903c 2 00:14:17.097 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:17.097 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:17.097 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=401531a1b81f67ed8f530d51d5a51a93cb736cb4a687903c 00:14:17.097 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:14:17.097 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:17.097 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.tGw 00:14:17.097 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.tGw 00:14:17.097 01:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.tGw 00:14:17.097 01:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:14:17.097 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:17.097 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:17.097 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:17.097 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:14:17.097 01:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:14:17.097 01:07:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:17.097 01:07:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=dbc18471c2d4cdc952f0a984310a09e6b77da5717a2f969d 00:14:17.097 
01:07:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:14:17.097 01:07:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.f6b 00:14:17.097 01:07:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key dbc18471c2d4cdc952f0a984310a09e6b77da5717a2f969d 2 00:14:17.097 01:07:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 dbc18471c2d4cdc952f0a984310a09e6b77da5717a2f969d 2 00:14:17.097 01:07:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:17.097 01:07:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:17.097 01:07:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=dbc18471c2d4cdc952f0a984310a09e6b77da5717a2f969d 00:14:17.097 01:07:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:14:17.097 01:07:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:17.097 01:07:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.f6b 00:14:17.097 01:07:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.f6b 00:14:17.097 01:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.f6b 00:14:17.097 01:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:14:17.097 01:07:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:17.097 01:07:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:17.097 01:07:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:17.097 01:07:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:14:17.097 01:07:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:14:17.097 01:07:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:17.097 01:07:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d09b51355a1539d9397377f0230f897b 00:14:17.097 01:07:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:14:17.097 01:07:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.GPm 00:14:17.097 01:07:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d09b51355a1539d9397377f0230f897b 1 00:14:17.097 01:07:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d09b51355a1539d9397377f0230f897b 1 00:14:17.097 01:07:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:17.097 01:07:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:17.097 01:07:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d09b51355a1539d9397377f0230f897b 00:14:17.097 01:07:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:14:17.097 01:07:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:17.355 01:07:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.GPm 00:14:17.355 01:07:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.GPm 00:14:17.355 01:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.GPm 00:14:17.355 01:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:14:17.355 01:07:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:14:17.355 01:07:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:17.355 01:07:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:17.355 01:07:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:14:17.355 01:07:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:14:17.355 01:07:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:17.355 01:07:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e4d821c3f2123a8892fbc8b789f1a671fce69440009e819f481270df521ccda8 00:14:17.355 01:07:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:14:17.355 01:07:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.woO 00:14:17.355 01:07:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e4d821c3f2123a8892fbc8b789f1a671fce69440009e819f481270df521ccda8 3 00:14:17.355 01:07:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e4d821c3f2123a8892fbc8b789f1a671fce69440009e819f481270df521ccda8 3 00:14:17.355 01:07:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:17.355 01:07:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:17.355 01:07:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=e4d821c3f2123a8892fbc8b789f1a671fce69440009e819f481270df521ccda8 00:14:17.355 01:07:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:14:17.355 01:07:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:17.355 01:07:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.woO 00:14:17.355 01:07:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.woO 00:14:17.355 01:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.woO 00:14:17.355 01:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:14:17.355 01:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 4141224 00:14:17.355 01:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 4141224 ']' 00:14:17.355 01:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:17.355 01:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:17.355 01:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:17.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
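Every gen_dhchap_key call above follows the same recipe, all of it visible in the trace except the python body: draw len/2 random bytes with xxd, map the digest name through (null=0, sha256=1, sha384=2, sha512=3), wrap the hex key in a DHHC-1 blob, and park it mode 0600 in a mktemp file. The python stand-in below assumes the NVMe in-band-auth secret representation (base64 of the key bytes followed by their little-endian CRC-32, with the hash id as two hex digits):

gen_dhchap_key() {    # usage: gen_dhchap_key sha256 32
    local digest=$1 len=$2 key file
    local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)    # len hex characters
    file=$(mktemp -t "spdk.key-$digest.XXX")
    # assumed format_key body: DHHC-1:<hash id>:<base64(key + crc32)>:
    python3 - "$key" "${digests[$digest]}" > "$file" <<'PY'
import base64, sys, zlib
key = bytes.fromhex(sys.argv[1])
crc = zlib.crc32(key).to_bytes(4, "little")
print(f"DHHC-1:{int(sys.argv[2]):02x}:{base64.b64encode(key + crc).decode()}:")
PY
    chmod 0600 "$file"
    echo "$file"
}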
00:14:17.355 01:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:17.355 01:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.611 01:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:17.611 01:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:14:17.611 01:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 4141243 /var/tmp/host.sock 00:14:17.611 01:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 4141243 ']' 00:14:17.611 01:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:14:17.611 01:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:17.611 01:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:17.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:17.611 01:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:17.611 01:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.868 01:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:17.868 01:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:14:17.868 01:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:14:17.868 01:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.868 01:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.868 01:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.868 01:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:17.868 01:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.xYV 00:14:17.868 01:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.868 01:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.868 01:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.868 01:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.xYV 00:14:17.868 01:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.xYV 00:14:18.125 01:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.1XE ]] 00:14:18.125 01:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1XE 00:14:18.125 01:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.125 01:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.125 01:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.125 01:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1XE 00:14:18.125 01:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1XE 00:14:18.382 01:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:18.382 01:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.dJ9 00:14:18.382 01:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.382 01:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.382 01:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.382 01:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.dJ9 00:14:18.382 01:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.dJ9 00:14:18.640 01:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.tGw ]] 00:14:18.640 01:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.tGw 00:14:18.640 01:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.640 01:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.640 01:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.640 01:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.tGw 00:14:18.640 01:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.tGw 00:14:18.897 01:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:18.897 01:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.f6b 00:14:18.897 01:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.897 01:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.897 01:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.897 01:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.f6b 00:14:18.897 01:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.f6b 00:14:19.154 01:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.GPm ]] 00:14:19.154 01:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.GPm 00:14:19.154 01:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.154 01:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.154 01:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.154 01:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.GPm 00:14:19.154 01:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.GPm 00:14:19.411 01:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:19.411 01:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.woO 00:14:19.411 01:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.411 01:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.411 01:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.411 01:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.woO 00:14:19.411 01:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.woO 00:14:19.668 01:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:14:19.668 01:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:14:19.668 01:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:19.668 01:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:19.668 01:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:19.668 01:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:19.925 01:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:14:19.925 01:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:19.925 01:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:19.925 01:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:19.925 01:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:19.925 01:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:19.925 01:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:19.925 01:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.925 01:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.925 01:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.925 01:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:19.925 01:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:20.183 00:14:20.183 01:07:36 nvmf_tcp.nvmf_auth_target -- 
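
Condensing the xtrace above: every key file is registered twice, once against the target RPC socket (rpc_cmd on /var/tmp/spdk.sock) and once against the host socket (hostrpc on /var/tmp/host.sock), after which one digest/dhgroup/key combination is wired up end to end. A hypothetical replay of that flow with the same RPCs and flags (the key-file name is a placeholder, since the log's tempfile suffixes are random):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
    SUBNQN=nqn.2024-03.io.spdk:cnode0

    # register key0 with both the target and the host keyrings
    $RPC keyring_file_add_key key0 /tmp/spdk.key-null.xYV
    $RPC -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.xYV

    # pin the host initiator to one digest/dhgroup combination
    $RPC -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups null

    # require DH-HMAC-CHAP for this host on the target, then attach from the host
    $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
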
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:20.183 01:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:20.183 01:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:20.439 01:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:20.439 01:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:20.439 01:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.439 01:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.439 01:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.439 01:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:20.439 { 00:14:20.439 "cntlid": 1, 00:14:20.439 "qid": 0, 00:14:20.439 "state": "enabled", 00:14:20.439 "thread": "nvmf_tgt_poll_group_000", 00:14:20.439 "listen_address": { 00:14:20.439 "trtype": "TCP", 00:14:20.439 "adrfam": "IPv4", 00:14:20.439 "traddr": "10.0.0.2", 00:14:20.439 "trsvcid": "4420" 00:14:20.439 }, 00:14:20.439 "peer_address": { 00:14:20.439 "trtype": "TCP", 00:14:20.439 "adrfam": "IPv4", 00:14:20.439 "traddr": "10.0.0.1", 00:14:20.439 "trsvcid": "33586" 00:14:20.439 }, 00:14:20.439 "auth": { 00:14:20.439 "state": "completed", 00:14:20.439 "digest": "sha256", 00:14:20.439 "dhgroup": "null" 00:14:20.439 } 00:14:20.439 } 00:14:20.439 ]' 00:14:20.439 01:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:20.439 01:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:20.439 01:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:20.439 01:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:20.439 01:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:20.439 01:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:20.439 01:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:20.439 01:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:20.696 01:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MWQ3MDA2MzRiZTNkMzdhZjNkMTI4Y2M4ZmMzZmFmZDIyYmUzZWU1MTE4ZmU4NzkyfVinMQ==: --dhchap-ctrl-secret DHHC-1:03:MDg2MTkwY2U5MjNhNDYwNTFhYWE0ZDNkZWJmOTg4OGQ0ZmZjZGU0Y2NjYjRmZmE2MzViZDE5NTAwNzQ5NzEwMR0d3Sw=: 00:14:21.626 01:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:21.626 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:21.626 01:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:21.626 01:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.626 01:07:37 
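
Each attach is then verified with three jq probes against nvmf_subsystem_get_qpairs output: the digest and dhgroup must match what bdev_nvme_set_options pinned, and auth.state must read "completed" before the controller is detached and the same credentials are retried through nvme-cli. The same check as a compact assertion (reusing $RPC from the sketch above):

    qpairs=$($RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null      ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
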
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.626 01:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.626 01:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:21.626 01:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:21.626 01:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:21.882 01:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:14:21.882 01:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:21.882 01:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:21.882 01:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:21.882 01:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:21.882 01:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:21.882 01:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:21.882 01:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.882 01:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.882 01:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.882 01:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:21.882 01:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:22.139 00:14:22.139 01:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:22.139 01:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:22.139 01:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:22.396 01:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:22.396 01:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:22.396 01:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.396 01:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.396 01:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.396 01:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:22.396 { 00:14:22.396 "cntlid": 3, 00:14:22.396 "qid": 0, 00:14:22.396 
"state": "enabled", 00:14:22.396 "thread": "nvmf_tgt_poll_group_000", 00:14:22.396 "listen_address": { 00:14:22.396 "trtype": "TCP", 00:14:22.396 "adrfam": "IPv4", 00:14:22.396 "traddr": "10.0.0.2", 00:14:22.396 "trsvcid": "4420" 00:14:22.396 }, 00:14:22.396 "peer_address": { 00:14:22.396 "trtype": "TCP", 00:14:22.396 "adrfam": "IPv4", 00:14:22.396 "traddr": "10.0.0.1", 00:14:22.396 "trsvcid": "33612" 00:14:22.396 }, 00:14:22.396 "auth": { 00:14:22.396 "state": "completed", 00:14:22.396 "digest": "sha256", 00:14:22.396 "dhgroup": "null" 00:14:22.396 } 00:14:22.396 } 00:14:22.396 ]' 00:14:22.396 01:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:22.396 01:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:22.396 01:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:22.652 01:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:22.652 01:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:22.652 01:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:22.652 01:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:22.652 01:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:22.907 01:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:ZDZlY2UwMmUyMDFjYThlZjVmNzQ3YTViMjNlNjNiYjI2oAMn: --dhchap-ctrl-secret DHHC-1:02:NDAxNTMxYTFiODFmNjdlZDhmNTMwZDUxZDVhNTFhOTNjYjczNmNiNGE2ODc5MDNj5nwdrQ==: 00:14:23.836 01:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:23.836 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:23.836 01:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:23.836 01:07:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.836 01:07:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.836 01:07:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.836 01:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:23.836 01:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:23.836 01:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:24.094 01:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:14:24.094 01:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:24.094 01:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:24.094 01:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:24.094 01:07:39 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:24.094 01:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:24.094 01:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:24.094 01:07:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.094 01:07:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.095 01:07:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.095 01:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:24.095 01:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:24.351 00:14:24.351 01:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:24.351 01:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:24.351 01:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:24.609 01:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:24.609 01:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:24.609 01:07:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.609 01:07:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.609 01:07:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.609 01:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:24.609 { 00:14:24.609 "cntlid": 5, 00:14:24.609 "qid": 0, 00:14:24.609 "state": "enabled", 00:14:24.609 "thread": "nvmf_tgt_poll_group_000", 00:14:24.609 "listen_address": { 00:14:24.609 "trtype": "TCP", 00:14:24.609 "adrfam": "IPv4", 00:14:24.609 "traddr": "10.0.0.2", 00:14:24.609 "trsvcid": "4420" 00:14:24.609 }, 00:14:24.609 "peer_address": { 00:14:24.609 "trtype": "TCP", 00:14:24.609 "adrfam": "IPv4", 00:14:24.609 "traddr": "10.0.0.1", 00:14:24.609 "trsvcid": "33642" 00:14:24.609 }, 00:14:24.609 "auth": { 00:14:24.609 "state": "completed", 00:14:24.609 "digest": "sha256", 00:14:24.609 "dhgroup": "null" 00:14:24.609 } 00:14:24.609 } 00:14:24.609 ]' 00:14:24.609 01:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:24.609 01:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:24.609 01:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:24.609 01:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:24.609 01:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:14:24.609 01:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:24.609 01:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:24.609 01:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:24.866 01:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZGJjMTg0NzFjMmQ0Y2RjOTUyZjBhOTg0MzEwYTA5ZTZiNzdkYTU3MTdhMmY5Njlkkzt+ZA==: --dhchap-ctrl-secret DHHC-1:01:ZDA5YjUxMzU1YTE1MzlkOTM5NzM3N2YwMjMwZjg5N2Lku1mG: 00:14:25.798 01:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:25.798 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:25.798 01:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:25.798 01:07:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.798 01:07:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.798 01:07:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.798 01:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:25.798 01:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:25.798 01:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:26.056 01:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:14:26.056 01:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:26.056 01:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:26.056 01:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:26.056 01:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:26.056 01:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:26.056 01:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:14:26.056 01:07:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.056 01:07:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.056 01:07:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.056 01:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:26.056 01:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:26.331 00:14:26.331 01:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:26.331 01:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:26.331 01:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:26.589 01:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:26.589 01:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:26.589 01:07:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.589 01:07:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.589 01:07:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.589 01:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:26.589 { 00:14:26.589 "cntlid": 7, 00:14:26.589 "qid": 0, 00:14:26.589 "state": "enabled", 00:14:26.589 "thread": "nvmf_tgt_poll_group_000", 00:14:26.589 "listen_address": { 00:14:26.589 "trtype": "TCP", 00:14:26.589 "adrfam": "IPv4", 00:14:26.589 "traddr": "10.0.0.2", 00:14:26.589 "trsvcid": "4420" 00:14:26.589 }, 00:14:26.589 "peer_address": { 00:14:26.589 "trtype": "TCP", 00:14:26.589 "adrfam": "IPv4", 00:14:26.589 "traddr": "10.0.0.1", 00:14:26.589 "trsvcid": "33660" 00:14:26.589 }, 00:14:26.589 "auth": { 00:14:26.589 "state": "completed", 00:14:26.589 "digest": "sha256", 00:14:26.589 "dhgroup": "null" 00:14:26.589 } 00:14:26.589 } 00:14:26.589 ]' 00:14:26.589 01:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:26.589 01:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:26.589 01:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:26.846 01:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:26.846 01:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:26.846 01:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:26.846 01:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:26.846 01:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:27.104 01:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTRkODIxYzNmMjEyM2E4ODkyZmJjOGI3ODlmMWE2NzFmY2U2OTQ0MDAwOWU4MTlmNDgxMjcwZGY1MjFjY2RhOCZ2p/Y=: 00:14:28.035 01:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:28.035 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:28.035 01:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
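
Note the asymmetry in the key3 rounds: ckeys[3] was left empty when the keys were generated, so the ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} expansion traced at target/auth.sh@37 collapses to nothing and authentication runs unidirectionally (no --dhchap-ctrlr-key on add_host or attach, no --dhchap-ctrl-secret on nvme connect, exactly as in the key3 cycle above). A sketch of that conditional, with hypothetical loop variables and $RPC as above:

    # ${var:+word} yields word only when var is set and non-empty, so an empty
    # ckeys[i] drops the controller-key flag pair entirely; "${ckey[@]}" then
    # expands to zero words.
    ckey=(${ckeys[i]:+--dhchap-ctrlr-key "ckey$i"})
    $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key "key$i" "${ckey[@]}"
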
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:28.035 01:07:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.035 01:07:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.035 01:07:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.035 01:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:28.035 01:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:28.035 01:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:28.035 01:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:28.035 01:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:14:28.035 01:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:28.035 01:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:28.035 01:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:28.035 01:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:28.035 01:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:28.035 01:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:28.035 01:07:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.035 01:07:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.035 01:07:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.035 01:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:28.035 01:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:28.292 00:14:28.549 01:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:28.549 01:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:28.549 01:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:28.806 01:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:28.806 01:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:28.806 01:07:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:14:28.806 01:07:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.806 01:07:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.806 01:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:28.806 { 00:14:28.806 "cntlid": 9, 00:14:28.806 "qid": 0, 00:14:28.806 "state": "enabled", 00:14:28.806 "thread": "nvmf_tgt_poll_group_000", 00:14:28.806 "listen_address": { 00:14:28.806 "trtype": "TCP", 00:14:28.806 "adrfam": "IPv4", 00:14:28.806 "traddr": "10.0.0.2", 00:14:28.806 "trsvcid": "4420" 00:14:28.806 }, 00:14:28.806 "peer_address": { 00:14:28.806 "trtype": "TCP", 00:14:28.806 "adrfam": "IPv4", 00:14:28.806 "traddr": "10.0.0.1", 00:14:28.806 "trsvcid": "33696" 00:14:28.806 }, 00:14:28.806 "auth": { 00:14:28.806 "state": "completed", 00:14:28.806 "digest": "sha256", 00:14:28.806 "dhgroup": "ffdhe2048" 00:14:28.806 } 00:14:28.806 } 00:14:28.806 ]' 00:14:28.806 01:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:28.806 01:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:28.806 01:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:28.806 01:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:28.806 01:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:28.806 01:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:28.806 01:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:28.806 01:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:29.063 01:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MWQ3MDA2MzRiZTNkMzdhZjNkMTI4Y2M4ZmMzZmFmZDIyYmUzZWU1MTE4ZmU4NzkyfVinMQ==: --dhchap-ctrl-secret DHHC-1:03:MDg2MTkwY2U5MjNhNDYwNTFhYWE0ZDNkZWJmOTg4OGQ0ZmZjZGU0Y2NjYjRmZmE2MzViZDE5NTAwNzQ5NzEwMR0d3Sw=: 00:14:29.994 01:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:29.994 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:29.994 01:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:29.994 01:07:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.994 01:07:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.994 01:07:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.994 01:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:29.994 01:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:29.994 01:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:14:30.251 01:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:14:30.251 01:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:30.251 01:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:30.251 01:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:30.251 01:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:30.251 01:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:30.251 01:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:30.251 01:07:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.251 01:07:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.251 01:07:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.252 01:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:30.252 01:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:30.509 00:14:30.509 01:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:30.509 01:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:30.509 01:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:30.795 01:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:30.795 01:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:30.795 01:07:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.795 01:07:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.795 01:07:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.795 01:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:30.795 { 00:14:30.795 "cntlid": 11, 00:14:30.795 "qid": 0, 00:14:30.795 "state": "enabled", 00:14:30.795 "thread": "nvmf_tgt_poll_group_000", 00:14:30.795 "listen_address": { 00:14:30.795 "trtype": "TCP", 00:14:30.795 "adrfam": "IPv4", 00:14:30.795 "traddr": "10.0.0.2", 00:14:30.795 "trsvcid": "4420" 00:14:30.795 }, 00:14:30.795 "peer_address": { 00:14:30.795 "trtype": "TCP", 00:14:30.795 "adrfam": "IPv4", 00:14:30.795 "traddr": "10.0.0.1", 00:14:30.795 "trsvcid": "53092" 00:14:30.795 }, 00:14:30.795 "auth": { 00:14:30.795 "state": "completed", 00:14:30.795 "digest": "sha256", 00:14:30.795 "dhgroup": "ffdhe2048" 00:14:30.795 } 00:14:30.795 } 00:14:30.795 ]' 00:14:30.795 
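
From this point the log repeats the identical register/attach/verify/reconnect cycle while only the negotiated parameters move: bdev_nvme_set_options walks the DH groups, and the expected jq output changes with it. Schematically, for the groups visible in this excerpt (the full script presumably continues through the larger FFDHE groups and the other digests):

    for dhgroup in null ffdhe2048 ffdhe3072; do
        $RPC -s /var/tmp/host.sock bdev_nvme_set_options \
            --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
        # ...attach, assert .[0].auth.dhgroup == "$dhgroup", reconnect, tear down
    done
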
01:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:30.795 01:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:30.795 01:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:30.795 01:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:30.795 01:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:30.795 01:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:30.795 01:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:30.795 01:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:31.062 01:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:ZDZlY2UwMmUyMDFjYThlZjVmNzQ3YTViMjNlNjNiYjI2oAMn: --dhchap-ctrl-secret DHHC-1:02:NDAxNTMxYTFiODFmNjdlZDhmNTMwZDUxZDVhNTFhOTNjYjczNmNiNGE2ODc5MDNj5nwdrQ==: 00:14:31.994 01:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:31.994 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:31.994 01:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:31.994 01:07:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.994 01:07:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.994 01:07:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.994 01:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:31.994 01:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:31.994 01:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:32.252 01:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:14:32.252 01:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:32.252 01:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:32.252 01:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:32.252 01:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:32.252 01:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:32.252 01:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:32.252 01:07:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.252 01:07:48 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:32.252 01:07:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.252 01:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:32.252 01:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:32.509 00:14:32.509 01:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:32.509 01:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:32.509 01:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:32.767 01:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:32.767 01:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:32.767 01:07:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.767 01:07:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.767 01:07:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.767 01:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:32.767 { 00:14:32.767 "cntlid": 13, 00:14:32.767 "qid": 0, 00:14:32.767 "state": "enabled", 00:14:32.767 "thread": "nvmf_tgt_poll_group_000", 00:14:32.767 "listen_address": { 00:14:32.767 "trtype": "TCP", 00:14:32.767 "adrfam": "IPv4", 00:14:32.767 "traddr": "10.0.0.2", 00:14:32.767 "trsvcid": "4420" 00:14:32.767 }, 00:14:32.767 "peer_address": { 00:14:32.767 "trtype": "TCP", 00:14:32.767 "adrfam": "IPv4", 00:14:32.767 "traddr": "10.0.0.1", 00:14:32.767 "trsvcid": "53116" 00:14:32.767 }, 00:14:32.767 "auth": { 00:14:32.767 "state": "completed", 00:14:32.767 "digest": "sha256", 00:14:32.767 "dhgroup": "ffdhe2048" 00:14:32.767 } 00:14:32.767 } 00:14:32.767 ]' 00:14:32.767 01:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:33.024 01:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:33.024 01:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:33.024 01:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:33.024 01:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:33.024 01:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:33.024 01:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:33.024 01:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:33.282 01:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZGJjMTg0NzFjMmQ0Y2RjOTUyZjBhOTg0MzEwYTA5ZTZiNzdkYTU3MTdhMmY5Njlkkzt+ZA==: --dhchap-ctrl-secret DHHC-1:01:ZDA5YjUxMzU1YTE1MzlkOTM5NzM3N2YwMjMwZjg5N2Lku1mG: 00:14:34.215 01:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:34.215 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:34.215 01:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:34.215 01:07:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.215 01:07:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.215 01:07:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.215 01:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:34.215 01:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:34.215 01:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:34.473 01:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:14:34.473 01:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:34.473 01:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:34.473 01:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:34.473 01:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:34.473 01:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:34.473 01:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:14:34.473 01:07:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.473 01:07:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.473 01:07:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.473 01:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:34.473 01:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:34.730 00:14:34.730 01:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:34.730 01:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:34.730 01:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:34.988 01:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:34.988 01:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:34.988 01:07:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.988 01:07:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.988 01:07:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.988 01:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:34.988 { 00:14:34.988 "cntlid": 15, 00:14:34.988 "qid": 0, 00:14:34.988 "state": "enabled", 00:14:34.988 "thread": "nvmf_tgt_poll_group_000", 00:14:34.988 "listen_address": { 00:14:34.988 "trtype": "TCP", 00:14:34.988 "adrfam": "IPv4", 00:14:34.988 "traddr": "10.0.0.2", 00:14:34.988 "trsvcid": "4420" 00:14:34.988 }, 00:14:34.988 "peer_address": { 00:14:34.988 "trtype": "TCP", 00:14:34.988 "adrfam": "IPv4", 00:14:34.988 "traddr": "10.0.0.1", 00:14:34.988 "trsvcid": "53142" 00:14:34.988 }, 00:14:34.988 "auth": { 00:14:34.988 "state": "completed", 00:14:34.988 "digest": "sha256", 00:14:34.988 "dhgroup": "ffdhe2048" 00:14:34.988 } 00:14:34.988 } 00:14:34.988 ]' 00:14:34.988 01:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:34.988 01:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:34.988 01:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:35.245 01:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:35.245 01:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:35.245 01:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:35.245 01:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:35.245 01:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:35.504 01:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTRkODIxYzNmMjEyM2E4ODkyZmJjOGI3ODlmMWE2NzFmY2U2OTQ0MDAwOWU4MTlmNDgxMjcwZGY1MjFjY2RhOCZ2p/Y=: 00:14:36.438 01:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:36.438 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:36.438 01:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:36.438 01:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.438 01:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.438 01:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.438 01:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:36.438 01:07:52 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:36.438 01:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:36.438 01:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:36.438 01:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:14:36.438 01:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:36.438 01:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:36.438 01:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:36.438 01:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:36.438 01:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:36.438 01:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:36.438 01:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.438 01:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.438 01:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.438 01:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:36.438 01:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:37.004 00:14:37.004 01:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:37.004 01:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:37.004 01:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:37.004 01:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:37.004 01:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:37.004 01:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.004 01:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.262 01:07:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.262 01:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:37.262 { 00:14:37.262 "cntlid": 17, 00:14:37.262 "qid": 0, 00:14:37.262 "state": "enabled", 00:14:37.262 "thread": "nvmf_tgt_poll_group_000", 00:14:37.262 "listen_address": { 00:14:37.262 "trtype": "TCP", 00:14:37.262 "adrfam": "IPv4", 00:14:37.262 "traddr": 
"10.0.0.2", 00:14:37.262 "trsvcid": "4420" 00:14:37.262 }, 00:14:37.262 "peer_address": { 00:14:37.262 "trtype": "TCP", 00:14:37.262 "adrfam": "IPv4", 00:14:37.262 "traddr": "10.0.0.1", 00:14:37.262 "trsvcid": "53182" 00:14:37.262 }, 00:14:37.262 "auth": { 00:14:37.262 "state": "completed", 00:14:37.262 "digest": "sha256", 00:14:37.262 "dhgroup": "ffdhe3072" 00:14:37.262 } 00:14:37.262 } 00:14:37.262 ]' 00:14:37.262 01:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:37.262 01:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:37.262 01:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:37.262 01:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:37.262 01:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:37.262 01:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:37.262 01:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:37.262 01:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:37.519 01:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MWQ3MDA2MzRiZTNkMzdhZjNkMTI4Y2M4ZmMzZmFmZDIyYmUzZWU1MTE4ZmU4NzkyfVinMQ==: --dhchap-ctrl-secret DHHC-1:03:MDg2MTkwY2U5MjNhNDYwNTFhYWE0ZDNkZWJmOTg4OGQ0ZmZjZGU0Y2NjYjRmZmE2MzViZDE5NTAwNzQ5NzEwMR0d3Sw=: 00:14:38.450 01:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:38.450 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:38.450 01:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:38.450 01:07:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.450 01:07:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.450 01:07:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.450 01:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:38.450 01:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:38.450 01:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:38.707 01:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:14:38.707 01:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:38.707 01:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:38.707 01:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:38.708 01:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:38.708 01:07:54 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:38.708 01:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:38.708 01:07:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.708 01:07:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.708 01:07:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.708 01:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:38.708 01:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:39.273 00:14:39.273 01:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:39.273 01:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:39.273 01:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:39.273 01:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:39.273 01:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:39.273 01:07:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.273 01:07:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.273 01:07:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.273 01:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:39.273 { 00:14:39.273 "cntlid": 19, 00:14:39.273 "qid": 0, 00:14:39.273 "state": "enabled", 00:14:39.273 "thread": "nvmf_tgt_poll_group_000", 00:14:39.273 "listen_address": { 00:14:39.273 "trtype": "TCP", 00:14:39.273 "adrfam": "IPv4", 00:14:39.273 "traddr": "10.0.0.2", 00:14:39.273 "trsvcid": "4420" 00:14:39.273 }, 00:14:39.273 "peer_address": { 00:14:39.273 "trtype": "TCP", 00:14:39.273 "adrfam": "IPv4", 00:14:39.273 "traddr": "10.0.0.1", 00:14:39.273 "trsvcid": "53210" 00:14:39.273 }, 00:14:39.273 "auth": { 00:14:39.273 "state": "completed", 00:14:39.273 "digest": "sha256", 00:14:39.273 "dhgroup": "ffdhe3072" 00:14:39.273 } 00:14:39.273 } 00:14:39.273 ]' 00:14:39.273 01:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:39.531 01:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:39.531 01:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:39.531 01:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:39.531 01:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:39.531 01:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:39.531 01:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:39.531 01:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:39.789 01:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:ZDZlY2UwMmUyMDFjYThlZjVmNzQ3YTViMjNlNjNiYjI2oAMn: --dhchap-ctrl-secret DHHC-1:02:NDAxNTMxYTFiODFmNjdlZDhmNTMwZDUxZDVhNTFhOTNjYjczNmNiNGE2ODc5MDNj5nwdrQ==: 00:14:40.720 01:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:40.720 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:40.721 01:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:40.721 01:07:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.721 01:07:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.721 01:07:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.721 01:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:40.721 01:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:40.721 01:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:40.978 01:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:14:40.978 01:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:40.978 01:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:40.978 01:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:40.978 01:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:40.978 01:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:40.978 01:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:40.978 01:07:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.978 01:07:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.978 01:07:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.978 01:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:40.978 01:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:41.235 00:14:41.235 01:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:41.235 01:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:41.235 01:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:41.493 01:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:41.493 01:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:41.493 01:07:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.493 01:07:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.493 01:07:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.493 01:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:41.493 { 00:14:41.493 "cntlid": 21, 00:14:41.493 "qid": 0, 00:14:41.493 "state": "enabled", 00:14:41.493 "thread": "nvmf_tgt_poll_group_000", 00:14:41.493 "listen_address": { 00:14:41.493 "trtype": "TCP", 00:14:41.493 "adrfam": "IPv4", 00:14:41.493 "traddr": "10.0.0.2", 00:14:41.493 "trsvcid": "4420" 00:14:41.493 }, 00:14:41.493 "peer_address": { 00:14:41.493 "trtype": "TCP", 00:14:41.493 "adrfam": "IPv4", 00:14:41.493 "traddr": "10.0.0.1", 00:14:41.493 "trsvcid": "48248" 00:14:41.493 }, 00:14:41.493 "auth": { 00:14:41.493 "state": "completed", 00:14:41.493 "digest": "sha256", 00:14:41.493 "dhgroup": "ffdhe3072" 00:14:41.493 } 00:14:41.493 } 00:14:41.493 ]' 00:14:41.493 01:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:41.493 01:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:41.493 01:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:41.750 01:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:41.750 01:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:41.750 01:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:41.750 01:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:41.750 01:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:42.006 01:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZGJjMTg0NzFjMmQ0Y2RjOTUyZjBhOTg0MzEwYTA5ZTZiNzdkYTU3MTdhMmY5Njlkkzt+ZA==: --dhchap-ctrl-secret DHHC-1:01:ZDA5YjUxMzU1YTE1MzlkOTM5NzM3N2YwMjMwZjg5N2Lku1mG: 00:14:42.933 01:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:42.933 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
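Each pass of the loops above exercises one (digest, dhgroup, keyid) combination end to end: the host-side bdev_nvme module is restricted to a single DH-HMAC-CHAP parameter set, the target binds the key names to the host entry, and the controller attach forces the authentication handshake whose outcome is then read back from the qpair's auth block. Below is a condensed sketch of that per-iteration RPC sequence, using the paths, NQNs, and key names that appear verbatim in this run; the standalone commands are illustrative only (auth.sh drives the same steps through its rpc_cmd/hostrpc helpers), and key2/ckey2 refer to keys prepared earlier in the test.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Host side: permit exactly one digest/dhgroup pair for DH-HMAC-CHAP.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072

# Target side (default RPC socket): register the host with its key and controller key.
$rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
  nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
  --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Host side: attaching with the matching keys triggers the authentication exchange.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
  -a 10.0.0.2 -s 4420 \
  -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
  -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Target side: a successful handshake leaves the negotiated parameters on the qpair.
$rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.digest'   # expect sha256
$rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.dhgroup'  # expect ffdhe3072
$rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'    # expect completed

# Host side: detach before re-checking the same combination from the kernel initiator.
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

Each pass then repeats the handshake from the kernel initiator (nvme connect with the raw DHHC-1 secret strings via --dhchap-secret/--dhchap-ctrl-secret, followed by nvme disconnect), and the nvmf_subsystem_remove_host call that follows below clears the host entry so the loop can advance to the next keyid.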
00:14:42.933 01:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:42.933 01:07:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.933 01:07:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.933 01:07:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.933 01:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:42.933 01:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:42.933 01:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:43.189 01:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:14:43.189 01:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:43.189 01:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:43.189 01:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:43.189 01:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:43.189 01:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:43.189 01:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:14:43.189 01:07:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.189 01:07:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.189 01:07:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.190 01:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:43.190 01:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:43.446 00:14:43.446 01:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:43.446 01:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:43.446 01:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:43.703 01:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:43.703 01:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:43.703 01:07:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.703 01:07:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:14:43.703 01:07:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.703 01:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:43.703 { 00:14:43.703 "cntlid": 23, 00:14:43.703 "qid": 0, 00:14:43.703 "state": "enabled", 00:14:43.703 "thread": "nvmf_tgt_poll_group_000", 00:14:43.703 "listen_address": { 00:14:43.703 "trtype": "TCP", 00:14:43.703 "adrfam": "IPv4", 00:14:43.703 "traddr": "10.0.0.2", 00:14:43.703 "trsvcid": "4420" 00:14:43.703 }, 00:14:43.703 "peer_address": { 00:14:43.703 "trtype": "TCP", 00:14:43.703 "adrfam": "IPv4", 00:14:43.703 "traddr": "10.0.0.1", 00:14:43.703 "trsvcid": "48286" 00:14:43.703 }, 00:14:43.703 "auth": { 00:14:43.703 "state": "completed", 00:14:43.703 "digest": "sha256", 00:14:43.703 "dhgroup": "ffdhe3072" 00:14:43.703 } 00:14:43.703 } 00:14:43.703 ]' 00:14:43.703 01:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:43.703 01:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:43.703 01:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:43.703 01:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:43.703 01:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:43.703 01:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:43.703 01:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:43.703 01:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:43.959 01:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTRkODIxYzNmMjEyM2E4ODkyZmJjOGI3ODlmMWE2NzFmY2U2OTQ0MDAwOWU4MTlmNDgxMjcwZGY1MjFjY2RhOCZ2p/Y=: 00:14:44.890 01:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:44.890 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:44.890 01:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:44.890 01:08:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.890 01:08:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.890 01:08:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.890 01:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:44.890 01:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:44.890 01:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:44.890 01:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:45.150 01:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:14:45.150 01:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:45.150 01:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:45.150 01:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:45.150 01:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:45.150 01:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:45.150 01:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:45.150 01:08:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.150 01:08:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.150 01:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.150 01:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:45.150 01:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:45.413 00:14:45.413 01:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:45.413 01:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:45.413 01:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:45.673 01:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:45.673 01:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:45.673 01:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.673 01:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.673 01:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.673 01:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:45.673 { 00:14:45.673 "cntlid": 25, 00:14:45.673 "qid": 0, 00:14:45.673 "state": "enabled", 00:14:45.673 "thread": "nvmf_tgt_poll_group_000", 00:14:45.673 "listen_address": { 00:14:45.673 "trtype": "TCP", 00:14:45.673 "adrfam": "IPv4", 00:14:45.673 "traddr": "10.0.0.2", 00:14:45.673 "trsvcid": "4420" 00:14:45.673 }, 00:14:45.673 "peer_address": { 00:14:45.673 "trtype": "TCP", 00:14:45.673 "adrfam": "IPv4", 00:14:45.673 "traddr": "10.0.0.1", 00:14:45.673 "trsvcid": "48324" 00:14:45.673 }, 00:14:45.673 "auth": { 00:14:45.673 "state": "completed", 00:14:45.673 "digest": "sha256", 00:14:45.673 "dhgroup": "ffdhe4096" 00:14:45.673 } 00:14:45.673 } 00:14:45.673 ]' 00:14:45.673 01:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:45.930 01:08:01 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:45.930 01:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:45.930 01:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:45.930 01:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:45.930 01:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:45.930 01:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:45.930 01:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:46.188 01:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MWQ3MDA2MzRiZTNkMzdhZjNkMTI4Y2M4ZmMzZmFmZDIyYmUzZWU1MTE4ZmU4NzkyfVinMQ==: --dhchap-ctrl-secret DHHC-1:03:MDg2MTkwY2U5MjNhNDYwNTFhYWE0ZDNkZWJmOTg4OGQ0ZmZjZGU0Y2NjYjRmZmE2MzViZDE5NTAwNzQ5NzEwMR0d3Sw=: 00:14:47.144 01:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:47.144 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:47.144 01:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:47.144 01:08:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.144 01:08:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.144 01:08:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.144 01:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:47.144 01:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:47.144 01:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:47.405 01:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:14:47.405 01:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:47.405 01:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:47.405 01:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:47.405 01:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:47.405 01:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:47.405 01:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:47.405 01:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.405 01:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.405 01:08:03 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.405 01:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:47.405 01:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:47.661 00:14:47.661 01:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:47.661 01:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:47.661 01:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:47.918 01:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:47.918 01:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:47.918 01:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.918 01:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.174 01:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.174 01:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:48.174 { 00:14:48.174 "cntlid": 27, 00:14:48.175 "qid": 0, 00:14:48.175 "state": "enabled", 00:14:48.175 "thread": "nvmf_tgt_poll_group_000", 00:14:48.175 "listen_address": { 00:14:48.175 "trtype": "TCP", 00:14:48.175 "adrfam": "IPv4", 00:14:48.175 "traddr": "10.0.0.2", 00:14:48.175 "trsvcid": "4420" 00:14:48.175 }, 00:14:48.175 "peer_address": { 00:14:48.175 "trtype": "TCP", 00:14:48.175 "adrfam": "IPv4", 00:14:48.175 "traddr": "10.0.0.1", 00:14:48.175 "trsvcid": "48338" 00:14:48.175 }, 00:14:48.175 "auth": { 00:14:48.175 "state": "completed", 00:14:48.175 "digest": "sha256", 00:14:48.175 "dhgroup": "ffdhe4096" 00:14:48.175 } 00:14:48.175 } 00:14:48.175 ]' 00:14:48.175 01:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:48.175 01:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:48.175 01:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:48.175 01:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:48.175 01:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:48.175 01:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:48.175 01:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:48.175 01:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:48.432 01:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:ZDZlY2UwMmUyMDFjYThlZjVmNzQ3YTViMjNlNjNiYjI2oAMn: --dhchap-ctrl-secret DHHC-1:02:NDAxNTMxYTFiODFmNjdlZDhmNTMwZDUxZDVhNTFhOTNjYjczNmNiNGE2ODc5MDNj5nwdrQ==: 00:14:49.363 01:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:49.363 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:49.363 01:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:49.363 01:08:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.363 01:08:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.363 01:08:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.363 01:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:49.363 01:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:49.363 01:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:49.621 01:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:14:49.621 01:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:49.621 01:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:49.621 01:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:49.621 01:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:49.621 01:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:49.621 01:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:49.621 01:08:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.621 01:08:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.621 01:08:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.621 01:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:49.621 01:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:49.878 00:14:49.878 01:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:49.878 01:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:49.878 01:08:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:50.135 01:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:50.135 01:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:50.135 01:08:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.135 01:08:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.135 01:08:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.135 01:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:50.135 { 00:14:50.136 "cntlid": 29, 00:14:50.136 "qid": 0, 00:14:50.136 "state": "enabled", 00:14:50.136 "thread": "nvmf_tgt_poll_group_000", 00:14:50.136 "listen_address": { 00:14:50.136 "trtype": "TCP", 00:14:50.136 "adrfam": "IPv4", 00:14:50.136 "traddr": "10.0.0.2", 00:14:50.136 "trsvcid": "4420" 00:14:50.136 }, 00:14:50.136 "peer_address": { 00:14:50.136 "trtype": "TCP", 00:14:50.136 "adrfam": "IPv4", 00:14:50.136 "traddr": "10.0.0.1", 00:14:50.136 "trsvcid": "57150" 00:14:50.136 }, 00:14:50.136 "auth": { 00:14:50.136 "state": "completed", 00:14:50.136 "digest": "sha256", 00:14:50.136 "dhgroup": "ffdhe4096" 00:14:50.136 } 00:14:50.136 } 00:14:50.136 ]' 00:14:50.136 01:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:50.136 01:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:50.393 01:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:50.393 01:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:50.393 01:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:50.393 01:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:50.393 01:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:50.393 01:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:50.651 01:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZGJjMTg0NzFjMmQ0Y2RjOTUyZjBhOTg0MzEwYTA5ZTZiNzdkYTU3MTdhMmY5Njlkkzt+ZA==: --dhchap-ctrl-secret DHHC-1:01:ZDA5YjUxMzU1YTE1MzlkOTM5NzM3N2YwMjMwZjg5N2Lku1mG: 00:14:51.582 01:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:51.582 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:51.582 01:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:51.582 01:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.582 01:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.582 01:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.582 01:08:07 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:51.582 01:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:51.582 01:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:51.840 01:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:14:51.840 01:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:51.840 01:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:51.840 01:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:51.840 01:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:51.840 01:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:51.840 01:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:14:51.840 01:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.840 01:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.840 01:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.840 01:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:51.840 01:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:52.098 00:14:52.098 01:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:52.098 01:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:52.098 01:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:52.355 01:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:52.355 01:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:52.355 01:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.355 01:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.355 01:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.355 01:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:52.355 { 00:14:52.355 "cntlid": 31, 00:14:52.355 "qid": 0, 00:14:52.355 "state": "enabled", 00:14:52.355 "thread": "nvmf_tgt_poll_group_000", 00:14:52.355 "listen_address": { 00:14:52.355 "trtype": "TCP", 00:14:52.355 "adrfam": "IPv4", 00:14:52.355 "traddr": "10.0.0.2", 00:14:52.355 "trsvcid": "4420" 00:14:52.355 }, 
00:14:52.355 "peer_address": { 00:14:52.355 "trtype": "TCP", 00:14:52.355 "adrfam": "IPv4", 00:14:52.355 "traddr": "10.0.0.1", 00:14:52.355 "trsvcid": "57176" 00:14:52.355 }, 00:14:52.355 "auth": { 00:14:52.355 "state": "completed", 00:14:52.355 "digest": "sha256", 00:14:52.355 "dhgroup": "ffdhe4096" 00:14:52.355 } 00:14:52.355 } 00:14:52.355 ]' 00:14:52.355 01:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:52.355 01:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:52.355 01:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:52.355 01:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:52.355 01:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:52.612 01:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:52.612 01:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:52.612 01:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:52.869 01:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTRkODIxYzNmMjEyM2E4ODkyZmJjOGI3ODlmMWE2NzFmY2U2OTQ0MDAwOWU4MTlmNDgxMjcwZGY1MjFjY2RhOCZ2p/Y=: 00:14:53.818 01:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:53.818 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:53.818 01:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:53.818 01:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.818 01:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.818 01:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.818 01:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:53.818 01:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:53.818 01:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:53.818 01:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:53.818 01:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:14:53.818 01:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:53.818 01:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:53.818 01:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:53.818 01:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:53.818 01:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:14:53.818 01:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:53.819 01:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.819 01:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.819 01:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.819 01:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:53.819 01:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:54.382 00:14:54.382 01:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:54.382 01:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:54.382 01:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:54.639 01:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:54.639 01:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:54.639 01:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.639 01:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.639 01:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.639 01:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:54.639 { 00:14:54.639 "cntlid": 33, 00:14:54.639 "qid": 0, 00:14:54.639 "state": "enabled", 00:14:54.639 "thread": "nvmf_tgt_poll_group_000", 00:14:54.639 "listen_address": { 00:14:54.639 "trtype": "TCP", 00:14:54.639 "adrfam": "IPv4", 00:14:54.639 "traddr": "10.0.0.2", 00:14:54.639 "trsvcid": "4420" 00:14:54.639 }, 00:14:54.639 "peer_address": { 00:14:54.639 "trtype": "TCP", 00:14:54.639 "adrfam": "IPv4", 00:14:54.639 "traddr": "10.0.0.1", 00:14:54.639 "trsvcid": "57208" 00:14:54.639 }, 00:14:54.639 "auth": { 00:14:54.639 "state": "completed", 00:14:54.639 "digest": "sha256", 00:14:54.639 "dhgroup": "ffdhe6144" 00:14:54.639 } 00:14:54.639 } 00:14:54.639 ]' 00:14:54.639 01:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:54.639 01:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:54.639 01:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:54.896 01:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:54.896 01:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:54.896 01:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:54.896 01:08:10 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:54.896 01:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:55.153 01:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MWQ3MDA2MzRiZTNkMzdhZjNkMTI4Y2M4ZmMzZmFmZDIyYmUzZWU1MTE4ZmU4NzkyfVinMQ==: --dhchap-ctrl-secret DHHC-1:03:MDg2MTkwY2U5MjNhNDYwNTFhYWE0ZDNkZWJmOTg4OGQ0ZmZjZGU0Y2NjYjRmZmE2MzViZDE5NTAwNzQ5NzEwMR0d3Sw=: 00:14:56.084 01:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:56.084 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:56.084 01:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:56.084 01:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.084 01:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.084 01:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.084 01:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:56.084 01:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:56.084 01:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:56.341 01:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:14:56.341 01:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:56.341 01:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:56.341 01:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:56.341 01:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:56.341 01:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:56.341 01:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:56.341 01:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.341 01:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.341 01:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.341 01:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:56.341 01:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:56.906 00:14:56.906 01:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:56.906 01:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:56.906 01:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:57.163 01:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:57.163 01:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:57.163 01:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.163 01:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.164 01:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.164 01:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:57.164 { 00:14:57.164 "cntlid": 35, 00:14:57.164 "qid": 0, 00:14:57.164 "state": "enabled", 00:14:57.164 "thread": "nvmf_tgt_poll_group_000", 00:14:57.164 "listen_address": { 00:14:57.164 "trtype": "TCP", 00:14:57.164 "adrfam": "IPv4", 00:14:57.164 "traddr": "10.0.0.2", 00:14:57.164 "trsvcid": "4420" 00:14:57.164 }, 00:14:57.164 "peer_address": { 00:14:57.164 "trtype": "TCP", 00:14:57.164 "adrfam": "IPv4", 00:14:57.164 "traddr": "10.0.0.1", 00:14:57.164 "trsvcid": "57224" 00:14:57.164 }, 00:14:57.164 "auth": { 00:14:57.164 "state": "completed", 00:14:57.164 "digest": "sha256", 00:14:57.164 "dhgroup": "ffdhe6144" 00:14:57.164 } 00:14:57.164 } 00:14:57.164 ]' 00:14:57.164 01:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:57.164 01:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:57.164 01:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:57.164 01:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:57.164 01:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:57.164 01:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:57.164 01:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:57.164 01:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:57.422 01:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:ZDZlY2UwMmUyMDFjYThlZjVmNzQ3YTViMjNlNjNiYjI2oAMn: --dhchap-ctrl-secret DHHC-1:02:NDAxNTMxYTFiODFmNjdlZDhmNTMwZDUxZDVhNTFhOTNjYjczNmNiNGE2ODc5MDNj5nwdrQ==: 00:14:58.355 01:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:58.355 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:58.355 01:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 
-- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:58.355 01:08:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.355 01:08:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.355 01:08:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.355 01:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:58.355 01:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:58.355 01:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:58.613 01:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:14:58.613 01:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:58.613 01:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:58.613 01:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:58.613 01:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:58.613 01:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:58.613 01:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:58.613 01:08:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.613 01:08:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.613 01:08:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.613 01:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:58.613 01:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:59.178 00:14:59.178 01:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:59.178 01:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:59.178 01:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:59.435 01:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:59.435 01:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:59.435 01:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.435 01:08:15 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:59.435 01:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.435 01:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:59.435 { 00:14:59.435 "cntlid": 37, 00:14:59.435 "qid": 0, 00:14:59.435 "state": "enabled", 00:14:59.435 "thread": "nvmf_tgt_poll_group_000", 00:14:59.435 "listen_address": { 00:14:59.435 "trtype": "TCP", 00:14:59.435 "adrfam": "IPv4", 00:14:59.435 "traddr": "10.0.0.2", 00:14:59.435 "trsvcid": "4420" 00:14:59.435 }, 00:14:59.435 "peer_address": { 00:14:59.435 "trtype": "TCP", 00:14:59.435 "adrfam": "IPv4", 00:14:59.435 "traddr": "10.0.0.1", 00:14:59.435 "trsvcid": "57242" 00:14:59.435 }, 00:14:59.435 "auth": { 00:14:59.435 "state": "completed", 00:14:59.435 "digest": "sha256", 00:14:59.435 "dhgroup": "ffdhe6144" 00:14:59.435 } 00:14:59.435 } 00:14:59.435 ]' 00:14:59.435 01:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:59.435 01:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:59.435 01:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:59.435 01:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:59.435 01:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:59.435 01:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:59.435 01:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:59.435 01:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:59.693 01:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZGJjMTg0NzFjMmQ0Y2RjOTUyZjBhOTg0MzEwYTA5ZTZiNzdkYTU3MTdhMmY5Njlkkzt+ZA==: --dhchap-ctrl-secret DHHC-1:01:ZDA5YjUxMzU1YTE1MzlkOTM5NzM3N2YwMjMwZjg5N2Lku1mG: 00:15:00.624 01:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:00.624 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:00.624 01:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:00.624 01:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.624 01:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.624 01:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.624 01:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:00.624 01:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:00.624 01:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:00.881 01:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe6144 3 00:15:00.881 01:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:00.881 01:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:00.881 01:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:00.881 01:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:00.881 01:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:00.881 01:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:00.881 01:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.881 01:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.881 01:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.881 01:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:00.881 01:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:01.445 00:15:01.445 01:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:01.445 01:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:01.445 01:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:01.703 01:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:01.703 01:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:01.703 01:08:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.703 01:08:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.703 01:08:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.703 01:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:01.703 { 00:15:01.703 "cntlid": 39, 00:15:01.703 "qid": 0, 00:15:01.703 "state": "enabled", 00:15:01.703 "thread": "nvmf_tgt_poll_group_000", 00:15:01.703 "listen_address": { 00:15:01.703 "trtype": "TCP", 00:15:01.703 "adrfam": "IPv4", 00:15:01.703 "traddr": "10.0.0.2", 00:15:01.703 "trsvcid": "4420" 00:15:01.703 }, 00:15:01.703 "peer_address": { 00:15:01.703 "trtype": "TCP", 00:15:01.703 "adrfam": "IPv4", 00:15:01.703 "traddr": "10.0.0.1", 00:15:01.703 "trsvcid": "53492" 00:15:01.703 }, 00:15:01.703 "auth": { 00:15:01.703 "state": "completed", 00:15:01.703 "digest": "sha256", 00:15:01.703 "dhgroup": "ffdhe6144" 00:15:01.703 } 00:15:01.703 } 00:15:01.703 ]' 00:15:01.703 01:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:01.703 01:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:01.703 01:08:17 
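At this point the shape of one connect_authenticate iteration is fully visible in the trace: the host attaches a controller with a given DH-CHAP key, the test reads the subsystem's queue pairs back from the target, and jq assertions confirm what was actually negotiated. A condensed sketch of that check, using only commands that appear in this trace (rpc_cmd talks to the SPDK target, hostrpc to the host app on /var/tmp/host.sock; both are the test script's own helpers):

  # Target side: fetch the qpairs for the subsystem under test.
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

  # Authentication must have completed with the parameters configured
  # for this round (sha256 + ffdhe6144 in the iterations above).
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

  # Host side: the attached controller must be nvme0; drop it afterwards.
  [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  hostrpc bdev_nvme_detach_controller nvme0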
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:01.703 01:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:01.703 01:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:01.960 01:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:01.960 01:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:01.960 01:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:02.229 01:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTRkODIxYzNmMjEyM2E4ODkyZmJjOGI3ODlmMWE2NzFmY2U2OTQ0MDAwOWU4MTlmNDgxMjcwZGY1MjFjY2RhOCZ2p/Y=: 00:15:03.206 01:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:03.206 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:03.206 01:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:03.206 01:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.206 01:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.206 01:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.206 01:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:03.206 01:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:03.206 01:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:03.206 01:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:03.206 01:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:15:03.206 01:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:03.206 01:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:03.206 01:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:03.206 01:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:03.206 01:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:03.206 01:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:03.206 01:08:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.206 01:08:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.463 01:08:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.463 01:08:19 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:03.463 01:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:04.398 00:15:04.398 01:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:04.398 01:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:04.398 01:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:04.398 01:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:04.398 01:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:04.398 01:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.398 01:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.398 01:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.398 01:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:04.398 { 00:15:04.398 "cntlid": 41, 00:15:04.398 "qid": 0, 00:15:04.398 "state": "enabled", 00:15:04.398 "thread": "nvmf_tgt_poll_group_000", 00:15:04.398 "listen_address": { 00:15:04.398 "trtype": "TCP", 00:15:04.398 "adrfam": "IPv4", 00:15:04.398 "traddr": "10.0.0.2", 00:15:04.398 "trsvcid": "4420" 00:15:04.398 }, 00:15:04.398 "peer_address": { 00:15:04.398 "trtype": "TCP", 00:15:04.398 "adrfam": "IPv4", 00:15:04.398 "traddr": "10.0.0.1", 00:15:04.398 "trsvcid": "53520" 00:15:04.398 }, 00:15:04.398 "auth": { 00:15:04.398 "state": "completed", 00:15:04.398 "digest": "sha256", 00:15:04.398 "dhgroup": "ffdhe8192" 00:15:04.398 } 00:15:04.398 } 00:15:04.398 ]' 00:15:04.398 01:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:04.398 01:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:04.398 01:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:04.656 01:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:04.656 01:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:04.656 01:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:04.656 01:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:04.656 01:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:04.913 01:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret 
DHHC-1:00:MWQ3MDA2MzRiZTNkMzdhZjNkMTI4Y2M4ZmMzZmFmZDIyYmUzZWU1MTE4ZmU4NzkyfVinMQ==: --dhchap-ctrl-secret DHHC-1:03:MDg2MTkwY2U5MjNhNDYwNTFhYWE0ZDNkZWJmOTg4OGQ0ZmZjZGU0Y2NjYjRmZmE2MzViZDE5NTAwNzQ5NzEwMR0d3Sw=: 00:15:05.846 01:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:05.846 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:05.846 01:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:05.846 01:08:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.846 01:08:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.846 01:08:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.846 01:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:05.846 01:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:05.846 01:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:06.103 01:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:15:06.103 01:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:06.103 01:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:06.103 01:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:06.103 01:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:06.103 01:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:06.103 01:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:06.103 01:08:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.103 01:08:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.103 01:08:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.103 01:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:06.103 01:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:07.035 00:15:07.035 01:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:07.035 01:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:07.035 01:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:07.292 01:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:07.292 01:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:07.292 01:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.292 01:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.292 01:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.292 01:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:07.292 { 00:15:07.292 "cntlid": 43, 00:15:07.292 "qid": 0, 00:15:07.292 "state": "enabled", 00:15:07.292 "thread": "nvmf_tgt_poll_group_000", 00:15:07.292 "listen_address": { 00:15:07.292 "trtype": "TCP", 00:15:07.292 "adrfam": "IPv4", 00:15:07.292 "traddr": "10.0.0.2", 00:15:07.292 "trsvcid": "4420" 00:15:07.292 }, 00:15:07.292 "peer_address": { 00:15:07.292 "trtype": "TCP", 00:15:07.292 "adrfam": "IPv4", 00:15:07.292 "traddr": "10.0.0.1", 00:15:07.292 "trsvcid": "53546" 00:15:07.292 }, 00:15:07.292 "auth": { 00:15:07.292 "state": "completed", 00:15:07.292 "digest": "sha256", 00:15:07.292 "dhgroup": "ffdhe8192" 00:15:07.292 } 00:15:07.292 } 00:15:07.292 ]' 00:15:07.292 01:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:07.292 01:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:07.292 01:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:07.292 01:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:07.292 01:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:07.292 01:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:07.292 01:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:07.292 01:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:07.549 01:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:ZDZlY2UwMmUyMDFjYThlZjVmNzQ3YTViMjNlNjNiYjI2oAMn: --dhchap-ctrl-secret DHHC-1:02:NDAxNTMxYTFiODFmNjdlZDhmNTMwZDUxZDVhNTFhOTNjYjczNmNiNGE2ODc5MDNj5nwdrQ==: 00:15:08.480 01:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:08.480 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:08.480 01:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:08.480 01:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.480 01:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.480 01:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.481 01:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:15:08.481 01:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:08.481 01:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:08.738 01:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:15:08.738 01:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:08.738 01:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:08.738 01:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:08.738 01:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:08.738 01:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:08.738 01:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:08.738 01:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.738 01:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.738 01:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.738 01:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:08.738 01:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:09.670 00:15:09.670 01:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:09.670 01:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:09.671 01:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:09.928 01:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:09.928 01:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:09.928 01:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.928 01:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.928 01:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.928 01:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:09.928 { 00:15:09.928 "cntlid": 45, 00:15:09.928 "qid": 0, 00:15:09.928 "state": "enabled", 00:15:09.928 "thread": "nvmf_tgt_poll_group_000", 00:15:09.928 "listen_address": { 00:15:09.928 "trtype": "TCP", 00:15:09.928 "adrfam": "IPv4", 00:15:09.928 "traddr": "10.0.0.2", 00:15:09.928 "trsvcid": "4420" 
00:15:09.928 }, 00:15:09.928 "peer_address": { 00:15:09.928 "trtype": "TCP", 00:15:09.928 "adrfam": "IPv4", 00:15:09.928 "traddr": "10.0.0.1", 00:15:09.928 "trsvcid": "53566" 00:15:09.928 }, 00:15:09.928 "auth": { 00:15:09.928 "state": "completed", 00:15:09.928 "digest": "sha256", 00:15:09.928 "dhgroup": "ffdhe8192" 00:15:09.928 } 00:15:09.928 } 00:15:09.928 ]' 00:15:09.928 01:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:09.928 01:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:09.928 01:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:09.928 01:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:09.928 01:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:09.928 01:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:09.928 01:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:09.928 01:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:10.186 01:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZGJjMTg0NzFjMmQ0Y2RjOTUyZjBhOTg0MzEwYTA5ZTZiNzdkYTU3MTdhMmY5Njlkkzt+ZA==: --dhchap-ctrl-secret DHHC-1:01:ZDA5YjUxMzU1YTE1MzlkOTM5NzM3N2YwMjMwZjg5N2Lku1mG: 00:15:11.117 01:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:11.117 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:11.117 01:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:11.117 01:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.117 01:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.117 01:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.117 01:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:11.117 01:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:11.117 01:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:11.374 01:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:15:11.374 01:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:11.374 01:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:11.374 01:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:11.374 01:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:11.374 01:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:11.374 01:08:27 
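The ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion above explains why the nvmf_subsystem_add_host call that follows carries no --dhchap-ctrlr-key: bash's ${var:+word} substitutes the flag only when a controller key is defined for that index, and key3 has no ckey3 in this run, so the key3 rounds exercise one-way (host-only) authentication. A long-hand equivalent, with keyid standing in for the script's positional $3 and subnqn/hostnqn as placeholders:

  ckey=()                                    # stays empty without a ctrlr key
  if [[ -n ${ckeys[$keyid]:-} ]]; then
      ckey=(--dhchap-ctrlr-key "ckey$keyid")
  fi
  rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
          --dhchap-key "key$keyid" "${ckey[@]}"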
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:11.374 01:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.374 01:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.374 01:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.374 01:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:11.374 01:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:12.302 00:15:12.302 01:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:12.302 01:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:12.302 01:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:12.561 01:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:12.561 01:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:12.561 01:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.561 01:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.561 01:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.561 01:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:12.561 { 00:15:12.561 "cntlid": 47, 00:15:12.561 "qid": 0, 00:15:12.561 "state": "enabled", 00:15:12.561 "thread": "nvmf_tgt_poll_group_000", 00:15:12.561 "listen_address": { 00:15:12.561 "trtype": "TCP", 00:15:12.561 "adrfam": "IPv4", 00:15:12.561 "traddr": "10.0.0.2", 00:15:12.561 "trsvcid": "4420" 00:15:12.561 }, 00:15:12.561 "peer_address": { 00:15:12.561 "trtype": "TCP", 00:15:12.561 "adrfam": "IPv4", 00:15:12.561 "traddr": "10.0.0.1", 00:15:12.561 "trsvcid": "38840" 00:15:12.561 }, 00:15:12.561 "auth": { 00:15:12.561 "state": "completed", 00:15:12.561 "digest": "sha256", 00:15:12.561 "dhgroup": "ffdhe8192" 00:15:12.561 } 00:15:12.561 } 00:15:12.561 ]' 00:15:12.561 01:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:12.561 01:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:12.561 01:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:12.561 01:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:12.561 01:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:12.561 01:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:12.561 01:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:12.561 
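Once the SPDK-host assertions pass, each round detaches the bdev controller and replays the same credentials through the Linux kernel NVMe/TCP initiator, which is what the nvme connect / nvme disconnect pairs in this trace are doing. A sketch of that leg (the DHHC-1:xx:...: strings are the base64 secret blobs printed in the trace; the variable names here are stand-ins, and --dhchap-ctrl-secret is passed only for keys that have a controller secret):

  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 \
       -q "$hostnqn" --hostid "$hostid" \
       --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
  nvme disconnect -n "$subnqn"   # expect "disconnected 1 controller(s)"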
01:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:12.817 01:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTRkODIxYzNmMjEyM2E4ODkyZmJjOGI3ODlmMWE2NzFmY2U2OTQ0MDAwOWU4MTlmNDgxMjcwZGY1MjFjY2RhOCZ2p/Y=: 00:15:13.748 01:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:13.748 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:13.748 01:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:13.748 01:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.748 01:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.748 01:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.748 01:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:15:13.748 01:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:13.748 01:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:13.748 01:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:13.748 01:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:14.006 01:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:15:14.006 01:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:14.006 01:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:14.006 01:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:14.006 01:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:14.006 01:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:14.006 01:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:14.006 01:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.006 01:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.006 01:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.006 01:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:14.006 01:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:14.263 00:15:14.263 01:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:14.263 01:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:14.264 01:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:14.521 01:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:14.521 01:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:14.521 01:08:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.521 01:08:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.521 01:08:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.521 01:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:14.521 { 00:15:14.521 "cntlid": 49, 00:15:14.521 "qid": 0, 00:15:14.521 "state": "enabled", 00:15:14.521 "thread": "nvmf_tgt_poll_group_000", 00:15:14.521 "listen_address": { 00:15:14.521 "trtype": "TCP", 00:15:14.521 "adrfam": "IPv4", 00:15:14.521 "traddr": "10.0.0.2", 00:15:14.521 "trsvcid": "4420" 00:15:14.521 }, 00:15:14.521 "peer_address": { 00:15:14.521 "trtype": "TCP", 00:15:14.521 "adrfam": "IPv4", 00:15:14.521 "traddr": "10.0.0.1", 00:15:14.521 "trsvcid": "38848" 00:15:14.521 }, 00:15:14.521 "auth": { 00:15:14.521 "state": "completed", 00:15:14.521 "digest": "sha384", 00:15:14.521 "dhgroup": "null" 00:15:14.521 } 00:15:14.521 } 00:15:14.521 ]' 00:15:14.521 01:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:14.521 01:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:14.521 01:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:14.778 01:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:14.778 01:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:14.778 01:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:14.778 01:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:14.778 01:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:15.036 01:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MWQ3MDA2MzRiZTNkMzdhZjNkMTI4Y2M4ZmMzZmFmZDIyYmUzZWU1MTE4ZmU4NzkyfVinMQ==: --dhchap-ctrl-secret DHHC-1:03:MDg2MTkwY2U5MjNhNDYwNTFhYWE0ZDNkZWJmOTg4OGQ0ZmZjZGU0Y2NjYjRmZmE2MzViZDE5NTAwNzQ5NzEwMR0d3Sw=: 00:15:15.970 01:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:15.970 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:15.970 01:08:31 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:15.970 01:08:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.970 01:08:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.970 01:08:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.970 01:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:15.970 01:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:15.970 01:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:15.970 01:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:15:15.970 01:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:15.970 01:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:15.970 01:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:15.970 01:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:15.970 01:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:15.970 01:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:15.970 01:08:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.970 01:08:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.970 01:08:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.228 01:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:16.228 01:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:16.485 00:15:16.485 01:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:16.485 01:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:16.485 01:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:16.742 01:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:16.742 01:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:16.742 01:08:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.742 01:08:32 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:16.742 01:08:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.742 01:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:16.742 { 00:15:16.742 "cntlid": 51, 00:15:16.742 "qid": 0, 00:15:16.742 "state": "enabled", 00:15:16.743 "thread": "nvmf_tgt_poll_group_000", 00:15:16.743 "listen_address": { 00:15:16.743 "trtype": "TCP", 00:15:16.743 "adrfam": "IPv4", 00:15:16.743 "traddr": "10.0.0.2", 00:15:16.743 "trsvcid": "4420" 00:15:16.743 }, 00:15:16.743 "peer_address": { 00:15:16.743 "trtype": "TCP", 00:15:16.743 "adrfam": "IPv4", 00:15:16.743 "traddr": "10.0.0.1", 00:15:16.743 "trsvcid": "38866" 00:15:16.743 }, 00:15:16.743 "auth": { 00:15:16.743 "state": "completed", 00:15:16.743 "digest": "sha384", 00:15:16.743 "dhgroup": "null" 00:15:16.743 } 00:15:16.743 } 00:15:16.743 ]' 00:15:16.743 01:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:16.743 01:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:16.743 01:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:16.743 01:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:16.743 01:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:16.743 01:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:16.743 01:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:16.743 01:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:17.000 01:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:ZDZlY2UwMmUyMDFjYThlZjVmNzQ3YTViMjNlNjNiYjI2oAMn: --dhchap-ctrl-secret DHHC-1:02:NDAxNTMxYTFiODFmNjdlZDhmNTMwZDUxZDVhNTFhOTNjYjczNmNiNGE2ODc5MDNj5nwdrQ==: 00:15:17.929 01:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:17.929 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:17.929 01:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:17.929 01:08:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.929 01:08:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.929 01:08:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.929 01:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:17.929 01:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:17.929 01:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:18.187 01:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:15:18.187 01:08:34 
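The sweep has now moved from the FFDHE groups to the "null" DH group. Per the NVMe in-band authentication spec, DH-HMAC-CHAP with a NULL group computes the challenge/response from the configured secret alone, with no ephemeral Diffie-Hellman exchange layered on top, so these rounds cover the plain CHAP path; the jq assertions correspondingly expect .auth.dhgroup == "null". On the host side the two phases differ only in the set_options call:

  # FFDHE rounds: shared secret plus an ephemeral finite-field DH exchange
  hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
  # NULL rounds: shared-secret challenge/response only
  hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null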
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:18.187 01:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:18.187 01:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:18.187 01:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:18.187 01:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:18.187 01:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:18.187 01:08:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.187 01:08:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.187 01:08:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.187 01:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:18.187 01:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:18.445 00:15:18.445 01:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:18.445 01:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:18.445 01:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:18.703 01:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:18.703 01:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:18.703 01:08:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.703 01:08:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.703 01:08:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.703 01:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:18.703 { 00:15:18.703 "cntlid": 53, 00:15:18.703 "qid": 0, 00:15:18.703 "state": "enabled", 00:15:18.703 "thread": "nvmf_tgt_poll_group_000", 00:15:18.703 "listen_address": { 00:15:18.703 "trtype": "TCP", 00:15:18.703 "adrfam": "IPv4", 00:15:18.703 "traddr": "10.0.0.2", 00:15:18.703 "trsvcid": "4420" 00:15:18.703 }, 00:15:18.703 "peer_address": { 00:15:18.703 "trtype": "TCP", 00:15:18.703 "adrfam": "IPv4", 00:15:18.703 "traddr": "10.0.0.1", 00:15:18.703 "trsvcid": "38896" 00:15:18.703 }, 00:15:18.703 "auth": { 00:15:18.703 "state": "completed", 00:15:18.703 "digest": "sha384", 00:15:18.703 "dhgroup": "null" 00:15:18.703 } 00:15:18.703 } 00:15:18.703 ]' 00:15:18.703 01:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:18.703 01:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:15:18.703 01:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:18.703 01:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:18.703 01:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:18.960 01:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:18.960 01:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:18.960 01:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:19.217 01:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZGJjMTg0NzFjMmQ0Y2RjOTUyZjBhOTg0MzEwYTA5ZTZiNzdkYTU3MTdhMmY5Njlkkzt+ZA==: --dhchap-ctrl-secret DHHC-1:01:ZDA5YjUxMzU1YTE1MzlkOTM5NzM3N2YwMjMwZjg5N2Lku1mG: 00:15:20.185 01:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:20.185 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:20.185 01:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:20.185 01:08:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.185 01:08:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.185 01:08:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.185 01:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:20.185 01:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:20.185 01:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:20.185 01:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:15:20.185 01:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:20.185 01:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:20.186 01:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:20.186 01:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:20.186 01:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:20.186 01:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:20.186 01:08:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.186 01:08:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.186 01:08:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.186 01:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:20.186 01:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:20.443 00:15:20.443 01:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:20.443 01:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:20.443 01:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:20.700 01:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:20.700 01:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:20.700 01:08:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.700 01:08:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.700 01:08:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.700 01:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:20.700 { 00:15:20.700 "cntlid": 55, 00:15:20.700 "qid": 0, 00:15:20.700 "state": "enabled", 00:15:20.700 "thread": "nvmf_tgt_poll_group_000", 00:15:20.700 "listen_address": { 00:15:20.700 "trtype": "TCP", 00:15:20.700 "adrfam": "IPv4", 00:15:20.700 "traddr": "10.0.0.2", 00:15:20.700 "trsvcid": "4420" 00:15:20.700 }, 00:15:20.700 "peer_address": { 00:15:20.700 "trtype": "TCP", 00:15:20.700 "adrfam": "IPv4", 00:15:20.700 "traddr": "10.0.0.1", 00:15:20.700 "trsvcid": "45406" 00:15:20.700 }, 00:15:20.700 "auth": { 00:15:20.700 "state": "completed", 00:15:20.700 "digest": "sha384", 00:15:20.700 "dhgroup": "null" 00:15:20.700 } 00:15:20.700 } 00:15:20.700 ]' 00:15:20.700 01:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:20.957 01:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:20.957 01:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:20.957 01:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:20.957 01:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:20.957 01:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:20.957 01:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:20.957 01:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:21.214 01:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTRkODIxYzNmMjEyM2E4ODkyZmJjOGI3ODlmMWE2NzFmY2U2OTQ0MDAwOWU4MTlmNDgxMjcwZGY1MjFjY2RhOCZ2p/Y=: 00:15:22.146 01:08:37 
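With the null-group rounds finishing and ffdhe2048 about to start, the driver structure of the test is visible: the for-loops at target/auth.sh@91-93 sweep digest, dhgroup, and key index, running one connect_authenticate per cell of the matrix. Schematically (array contents are inferred from the values seen in this log, so treat them as illustrative):

  for digest in "${digests[@]}"; do         # sha256, sha384, ... in this run
    for dhgroup in "${dhgroups[@]}"; do     # null, ffdhe2048, ..., ffdhe8192
      for keyid in "${!keys[@]}"; do        # 0..3
        hostrpc bdev_nvme_set_options --dhchap-digests "$digest" \
                --dhchap-dhgroups "$dhgroup"
        connect_authenticate "$digest" "$dhgroup" "$keyid"
      done
    done
  done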
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:22.146 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:22.146 01:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:22.146 01:08:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:22.146 01:08:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.146 01:08:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.146 01:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:22.146 01:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:22.146 01:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:22.146 01:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:22.403 01:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:15:22.403 01:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:22.403 01:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:22.403 01:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:22.403 01:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:22.403 01:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:22.404 01:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:22.404 01:08:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:22.404 01:08:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.404 01:08:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.404 01:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:22.404 01:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:22.661 00:15:22.661 01:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:22.661 01:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:22.661 01:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:22.918 01:08:38 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:22.919 01:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:22.919 01:08:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:22.919 01:08:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.919 01:08:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.919 01:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:22.919 { 00:15:22.919 "cntlid": 57, 00:15:22.919 "qid": 0, 00:15:22.919 "state": "enabled", 00:15:22.919 "thread": "nvmf_tgt_poll_group_000", 00:15:22.919 "listen_address": { 00:15:22.919 "trtype": "TCP", 00:15:22.919 "adrfam": "IPv4", 00:15:22.919 "traddr": "10.0.0.2", 00:15:22.919 "trsvcid": "4420" 00:15:22.919 }, 00:15:22.919 "peer_address": { 00:15:22.919 "trtype": "TCP", 00:15:22.919 "adrfam": "IPv4", 00:15:22.919 "traddr": "10.0.0.1", 00:15:22.919 "trsvcid": "45442" 00:15:22.919 }, 00:15:22.919 "auth": { 00:15:22.919 "state": "completed", 00:15:22.919 "digest": "sha384", 00:15:22.919 "dhgroup": "ffdhe2048" 00:15:22.919 } 00:15:22.919 } 00:15:22.919 ]' 00:15:22.919 01:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:22.919 01:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:22.919 01:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:23.175 01:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:23.175 01:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:23.175 01:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:23.175 01:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:23.175 01:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:23.433 01:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MWQ3MDA2MzRiZTNkMzdhZjNkMTI4Y2M4ZmMzZmFmZDIyYmUzZWU1MTE4ZmU4NzkyfVinMQ==: --dhchap-ctrl-secret DHHC-1:03:MDg2MTkwY2U5MjNhNDYwNTFhYWE0ZDNkZWJmOTg4OGQ0ZmZjZGU0Y2NjYjRmZmE2MzViZDE5NTAwNzQ5NzEwMR0d3Sw=: 00:15:24.367 01:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:24.367 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:24.367 01:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:24.367 01:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.367 01:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.367 01:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.367 01:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:24.367 01:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:24.367 01:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:24.625 01:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:15:24.625 01:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:24.625 01:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:24.625 01:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:24.625 01:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:24.625 01:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:24.625 01:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:24.625 01:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.625 01:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.625 01:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.625 01:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:24.625 01:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:24.883 00:15:24.883 01:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:24.883 01:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:24.883 01:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:25.141 01:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:25.141 01:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:25.141 01:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.141 01:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.141 01:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.141 01:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:25.141 { 00:15:25.141 "cntlid": 59, 00:15:25.141 "qid": 0, 00:15:25.141 "state": "enabled", 00:15:25.141 "thread": "nvmf_tgt_poll_group_000", 00:15:25.141 "listen_address": { 00:15:25.141 "trtype": "TCP", 00:15:25.141 "adrfam": "IPv4", 00:15:25.141 "traddr": "10.0.0.2", 00:15:25.141 "trsvcid": "4420" 00:15:25.141 }, 00:15:25.141 "peer_address": { 00:15:25.141 "trtype": "TCP", 00:15:25.141 "adrfam": "IPv4", 00:15:25.141 
"traddr": "10.0.0.1", 00:15:25.141 "trsvcid": "45480" 00:15:25.141 }, 00:15:25.141 "auth": { 00:15:25.141 "state": "completed", 00:15:25.141 "digest": "sha384", 00:15:25.141 "dhgroup": "ffdhe2048" 00:15:25.141 } 00:15:25.141 } 00:15:25.141 ]' 00:15:25.141 01:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:25.141 01:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:25.141 01:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:25.141 01:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:25.141 01:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:25.141 01:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:25.141 01:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:25.141 01:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:25.399 01:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:ZDZlY2UwMmUyMDFjYThlZjVmNzQ3YTViMjNlNjNiYjI2oAMn: --dhchap-ctrl-secret DHHC-1:02:NDAxNTMxYTFiODFmNjdlZDhmNTMwZDUxZDVhNTFhOTNjYjczNmNiNGE2ODc5MDNj5nwdrQ==: 00:15:26.332 01:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:26.332 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:26.332 01:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:26.332 01:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.332 01:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.332 01:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.332 01:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:26.332 01:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:26.332 01:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:26.590 01:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:15:26.590 01:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:26.590 01:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:26.590 01:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:26.590 01:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:26.590 01:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:26.590 01:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:26.590 01:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.590 01:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.590 01:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.590 01:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:26.590 01:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:26.847 00:15:26.847 01:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:26.847 01:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:26.847 01:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:27.104 01:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:27.104 01:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:27.104 01:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.104 01:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.104 01:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.104 01:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:27.104 { 00:15:27.104 "cntlid": 61, 00:15:27.104 "qid": 0, 00:15:27.104 "state": "enabled", 00:15:27.104 "thread": "nvmf_tgt_poll_group_000", 00:15:27.104 "listen_address": { 00:15:27.104 "trtype": "TCP", 00:15:27.104 "adrfam": "IPv4", 00:15:27.104 "traddr": "10.0.0.2", 00:15:27.104 "trsvcid": "4420" 00:15:27.104 }, 00:15:27.104 "peer_address": { 00:15:27.104 "trtype": "TCP", 00:15:27.104 "adrfam": "IPv4", 00:15:27.104 "traddr": "10.0.0.1", 00:15:27.104 "trsvcid": "45500" 00:15:27.104 }, 00:15:27.104 "auth": { 00:15:27.104 "state": "completed", 00:15:27.104 "digest": "sha384", 00:15:27.104 "dhgroup": "ffdhe2048" 00:15:27.104 } 00:15:27.104 } 00:15:27.104 ]' 00:15:27.104 01:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:27.359 01:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:27.359 01:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:27.360 01:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:27.360 01:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:27.360 01:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:27.360 01:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:27.360 01:08:43 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:27.615 01:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZGJjMTg0NzFjMmQ0Y2RjOTUyZjBhOTg0MzEwYTA5ZTZiNzdkYTU3MTdhMmY5Njlkkzt+ZA==: --dhchap-ctrl-secret DHHC-1:01:ZDA5YjUxMzU1YTE1MzlkOTM5NzM3N2YwMjMwZjg5N2Lku1mG: 00:15:28.545 01:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:28.545 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:28.545 01:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:28.545 01:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.545 01:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.545 01:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.545 01:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:28.545 01:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:28.545 01:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:28.802 01:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:15:28.802 01:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:28.802 01:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:28.802 01:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:28.802 01:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:28.802 01:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:28.802 01:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:28.802 01:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.802 01:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.802 01:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.802 01:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:28.802 01:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:29.059 00:15:29.059 01:08:44 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:29.059 01:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:29.059 01:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:29.316 01:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.316 01:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:29.316 01:08:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.316 01:08:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.316 01:08:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.316 01:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:29.316 { 00:15:29.316 "cntlid": 63, 00:15:29.316 "qid": 0, 00:15:29.316 "state": "enabled", 00:15:29.316 "thread": "nvmf_tgt_poll_group_000", 00:15:29.316 "listen_address": { 00:15:29.316 "trtype": "TCP", 00:15:29.316 "adrfam": "IPv4", 00:15:29.316 "traddr": "10.0.0.2", 00:15:29.316 "trsvcid": "4420" 00:15:29.316 }, 00:15:29.316 "peer_address": { 00:15:29.316 "trtype": "TCP", 00:15:29.316 "adrfam": "IPv4", 00:15:29.316 "traddr": "10.0.0.1", 00:15:29.316 "trsvcid": "45520" 00:15:29.316 }, 00:15:29.316 "auth": { 00:15:29.316 "state": "completed", 00:15:29.316 "digest": "sha384", 00:15:29.316 "dhgroup": "ffdhe2048" 00:15:29.316 } 00:15:29.316 } 00:15:29.317 ]' 00:15:29.317 01:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:29.317 01:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:29.317 01:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:29.317 01:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:29.317 01:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:29.317 01:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:29.317 01:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:29.317 01:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:29.574 01:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTRkODIxYzNmMjEyM2E4ODkyZmJjOGI3ODlmMWE2NzFmY2U2OTQ0MDAwOWU4MTlmNDgxMjcwZGY1MjFjY2RhOCZ2p/Y=: 00:15:30.505 01:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:30.505 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:30.505 01:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:30.505 01:08:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.505 01:08:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
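
Every (digest, dhgroup, keyid) pass in this run, including the ffdhe2048 pass that finishes above and the ffdhe3072 pass that starts below, drives the same RPC cycle: bdev_nvme_set_options pins the host to one DH-HMAC-CHAP digest and dhgroup, nvmf_subsystem_add_host registers the host NQN on the target with the key pair under test, bdev_nvme_attach_controller performs the authenticated TCP connect, and nvmf_subsystem_get_qpairs is checked with jq for the negotiated digest, dhgroup, and a "completed" auth state before everything is torn down again. The following is a minimal sketch of one such cycle, not the verbatim target/auth.sh; it assumes the host RPC socket shown in this log, a default socket for the target-side rpc_cmd wrapper (not visible in this excerpt), and key names (key0/ckey0) loaded earlier in the test.

  #!/usr/bin/env bash
  # Sketch of one connect_authenticate cycle from this log. Assumed, not
  # shown in the excerpt: the target RPC socket (rpc_cmd's default) and
  # the prior registration of the named keys key0/ckey0.
  set -e
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  hostrpc() { "$RPC" -s /var/tmp/host.sock "$@"; }  # initiator-side app
  tgtrpc()  { "$RPC" "$@"; }                        # target app, default socket (assumption)
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a

  # Pin the host to a single digest/dhgroup so the negotiation is deterministic.
  hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048

  # Allow HOSTNQN on the subsystem with the key pair under test.
  tgtrpc nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Authenticated fabric connect from the host side.
  hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Verify what was actually negotiated on the resulting qpair.
  qpairs=$(tgtrpc nvmf_subsystem_get_qpairs "$SUBNQN")
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

  # Tear down before the next (digest, dhgroup, keyid) combination.
  hostrpc bdev_nvme_detach_controller nvme0
  tgtrpc nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"
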
00:15:30.505 01:08:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.505 01:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:30.505 01:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:30.505 01:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:30.505 01:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:30.763 01:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:15:30.763 01:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:30.763 01:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:30.763 01:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:30.763 01:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:30.763 01:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:30.763 01:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:30.763 01:08:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.763 01:08:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.763 01:08:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.763 01:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:30.763 01:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:31.022 00:15:31.022 01:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:31.022 01:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:31.022 01:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:31.281 01:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:31.281 01:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:31.281 01:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.281 01:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.281 01:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.281 01:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:31.281 { 
00:15:31.281 "cntlid": 65, 00:15:31.281 "qid": 0, 00:15:31.281 "state": "enabled", 00:15:31.281 "thread": "nvmf_tgt_poll_group_000", 00:15:31.281 "listen_address": { 00:15:31.281 "trtype": "TCP", 00:15:31.281 "adrfam": "IPv4", 00:15:31.281 "traddr": "10.0.0.2", 00:15:31.281 "trsvcid": "4420" 00:15:31.281 }, 00:15:31.281 "peer_address": { 00:15:31.281 "trtype": "TCP", 00:15:31.281 "adrfam": "IPv4", 00:15:31.281 "traddr": "10.0.0.1", 00:15:31.281 "trsvcid": "39894" 00:15:31.281 }, 00:15:31.281 "auth": { 00:15:31.281 "state": "completed", 00:15:31.281 "digest": "sha384", 00:15:31.281 "dhgroup": "ffdhe3072" 00:15:31.281 } 00:15:31.281 } 00:15:31.281 ]' 00:15:31.281 01:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:31.281 01:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:31.281 01:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:31.539 01:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:31.539 01:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:31.539 01:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:31.539 01:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:31.539 01:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:31.797 01:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MWQ3MDA2MzRiZTNkMzdhZjNkMTI4Y2M4ZmMzZmFmZDIyYmUzZWU1MTE4ZmU4NzkyfVinMQ==: --dhchap-ctrl-secret DHHC-1:03:MDg2MTkwY2U5MjNhNDYwNTFhYWE0ZDNkZWJmOTg4OGQ0ZmZjZGU0Y2NjYjRmZmE2MzViZDE5NTAwNzQ5NzEwMR0d3Sw=: 00:15:32.730 01:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:32.730 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:32.730 01:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:32.730 01:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.730 01:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.730 01:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.730 01:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:32.730 01:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:32.730 01:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:32.988 01:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:15:32.988 01:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:32.988 01:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:15:32.988 01:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:32.988 01:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:32.988 01:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:32.988 01:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:32.988 01:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.988 01:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.988 01:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.988 01:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:32.988 01:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:33.246 00:15:33.246 01:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:33.246 01:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:33.246 01:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:33.504 01:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:33.504 01:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:33.504 01:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.504 01:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.504 01:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.504 01:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:33.504 { 00:15:33.504 "cntlid": 67, 00:15:33.504 "qid": 0, 00:15:33.504 "state": "enabled", 00:15:33.504 "thread": "nvmf_tgt_poll_group_000", 00:15:33.504 "listen_address": { 00:15:33.504 "trtype": "TCP", 00:15:33.504 "adrfam": "IPv4", 00:15:33.504 "traddr": "10.0.0.2", 00:15:33.504 "trsvcid": "4420" 00:15:33.504 }, 00:15:33.504 "peer_address": { 00:15:33.504 "trtype": "TCP", 00:15:33.504 "adrfam": "IPv4", 00:15:33.504 "traddr": "10.0.0.1", 00:15:33.504 "trsvcid": "39912" 00:15:33.504 }, 00:15:33.504 "auth": { 00:15:33.504 "state": "completed", 00:15:33.504 "digest": "sha384", 00:15:33.504 "dhgroup": "ffdhe3072" 00:15:33.504 } 00:15:33.504 } 00:15:33.504 ]' 00:15:33.504 01:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:33.504 01:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:33.504 01:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:33.504 01:08:49 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:33.504 01:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:33.504 01:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:33.504 01:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:33.504 01:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:33.762 01:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:ZDZlY2UwMmUyMDFjYThlZjVmNzQ3YTViMjNlNjNiYjI2oAMn: --dhchap-ctrl-secret DHHC-1:02:NDAxNTMxYTFiODFmNjdlZDhmNTMwZDUxZDVhNTFhOTNjYjczNmNiNGE2ODc5MDNj5nwdrQ==: 00:15:34.694 01:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:34.694 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:34.694 01:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:34.694 01:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.694 01:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.694 01:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.694 01:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:34.694 01:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:34.694 01:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:34.952 01:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:15:34.952 01:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:34.952 01:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:34.952 01:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:34.952 01:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:34.952 01:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:34.952 01:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:34.952 01:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.952 01:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.952 01:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.952 01:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:34.952 01:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:35.209 00:15:35.467 01:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:35.467 01:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.467 01:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:35.467 01:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.467 01:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:35.467 01:08:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.467 01:08:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.752 01:08:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.752 01:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:35.752 { 00:15:35.752 "cntlid": 69, 00:15:35.752 "qid": 0, 00:15:35.752 "state": "enabled", 00:15:35.752 "thread": "nvmf_tgt_poll_group_000", 00:15:35.752 "listen_address": { 00:15:35.752 "trtype": "TCP", 00:15:35.752 "adrfam": "IPv4", 00:15:35.752 "traddr": "10.0.0.2", 00:15:35.752 "trsvcid": "4420" 00:15:35.752 }, 00:15:35.752 "peer_address": { 00:15:35.752 "trtype": "TCP", 00:15:35.752 "adrfam": "IPv4", 00:15:35.752 "traddr": "10.0.0.1", 00:15:35.752 "trsvcid": "39926" 00:15:35.752 }, 00:15:35.752 "auth": { 00:15:35.752 "state": "completed", 00:15:35.752 "digest": "sha384", 00:15:35.752 "dhgroup": "ffdhe3072" 00:15:35.752 } 00:15:35.752 } 00:15:35.752 ]' 00:15:35.752 01:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:35.752 01:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:35.752 01:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:35.752 01:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:35.752 01:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:35.752 01:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:35.752 01:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:35.752 01:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:36.008 01:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZGJjMTg0NzFjMmQ0Y2RjOTUyZjBhOTg0MzEwYTA5ZTZiNzdkYTU3MTdhMmY5Njlkkzt+ZA==: --dhchap-ctrl-secret 
DHHC-1:01:ZDA5YjUxMzU1YTE1MzlkOTM5NzM3N2YwMjMwZjg5N2Lku1mG: 00:15:36.940 01:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:36.940 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:36.940 01:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:36.940 01:08:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.940 01:08:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.940 01:08:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.940 01:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:36.940 01:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:36.940 01:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:37.198 01:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:15:37.198 01:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:37.198 01:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:37.198 01:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:37.198 01:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:37.198 01:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:37.198 01:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:37.198 01:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.198 01:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.198 01:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.198 01:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:37.198 01:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:37.455 00:15:37.455 01:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:37.455 01:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:37.455 01:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.719 01:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.719 01:08:53 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:37.719 01:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.719 01:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.719 01:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.719 01:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:37.719 { 00:15:37.719 "cntlid": 71, 00:15:37.719 "qid": 0, 00:15:37.719 "state": "enabled", 00:15:37.719 "thread": "nvmf_tgt_poll_group_000", 00:15:37.719 "listen_address": { 00:15:37.719 "trtype": "TCP", 00:15:37.719 "adrfam": "IPv4", 00:15:37.719 "traddr": "10.0.0.2", 00:15:37.719 "trsvcid": "4420" 00:15:37.719 }, 00:15:37.719 "peer_address": { 00:15:37.719 "trtype": "TCP", 00:15:37.719 "adrfam": "IPv4", 00:15:37.720 "traddr": "10.0.0.1", 00:15:37.720 "trsvcid": "39958" 00:15:37.720 }, 00:15:37.720 "auth": { 00:15:37.720 "state": "completed", 00:15:37.720 "digest": "sha384", 00:15:37.720 "dhgroup": "ffdhe3072" 00:15:37.720 } 00:15:37.720 } 00:15:37.720 ]' 00:15:37.720 01:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:37.720 01:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:37.720 01:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:37.976 01:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:37.976 01:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:37.976 01:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:37.976 01:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:37.976 01:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:38.234 01:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTRkODIxYzNmMjEyM2E4ODkyZmJjOGI3ODlmMWE2NzFmY2U2OTQ0MDAwOWU4MTlmNDgxMjcwZGY1MjFjY2RhOCZ2p/Y=: 00:15:39.166 01:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:39.166 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:39.166 01:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:39.166 01:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.166 01:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.166 01:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.166 01:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:39.166 01:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:39.166 01:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:39.166 01:08:54 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:39.424 01:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:15:39.424 01:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:39.424 01:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:39.424 01:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:39.424 01:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:39.424 01:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:39.424 01:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:39.424 01:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.424 01:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.424 01:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.424 01:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:39.424 01:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:39.681 00:15:39.681 01:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:39.681 01:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:39.681 01:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.939 01:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.939 01:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.939 01:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.939 01:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.939 01:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.939 01:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:39.939 { 00:15:39.939 "cntlid": 73, 00:15:39.939 "qid": 0, 00:15:39.939 "state": "enabled", 00:15:39.939 "thread": "nvmf_tgt_poll_group_000", 00:15:39.939 "listen_address": { 00:15:39.939 "trtype": "TCP", 00:15:39.939 "adrfam": "IPv4", 00:15:39.939 "traddr": "10.0.0.2", 00:15:39.939 "trsvcid": "4420" 00:15:39.939 }, 00:15:39.939 "peer_address": { 00:15:39.939 "trtype": "TCP", 00:15:39.939 "adrfam": "IPv4", 00:15:39.939 "traddr": "10.0.0.1", 00:15:39.939 "trsvcid": "42816" 00:15:39.939 }, 00:15:39.939 "auth": { 00:15:39.939 
"state": "completed", 00:15:39.939 "digest": "sha384", 00:15:39.939 "dhgroup": "ffdhe4096" 00:15:39.939 } 00:15:39.939 } 00:15:39.939 ]' 00:15:39.939 01:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:39.939 01:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:39.939 01:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:40.197 01:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:40.197 01:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:40.197 01:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:40.197 01:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:40.197 01:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:40.454 01:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MWQ3MDA2MzRiZTNkMzdhZjNkMTI4Y2M4ZmMzZmFmZDIyYmUzZWU1MTE4ZmU4NzkyfVinMQ==: --dhchap-ctrl-secret DHHC-1:03:MDg2MTkwY2U5MjNhNDYwNTFhYWE0ZDNkZWJmOTg4OGQ0ZmZjZGU0Y2NjYjRmZmE2MzViZDE5NTAwNzQ5NzEwMR0d3Sw=: 00:15:41.385 01:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:41.385 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:41.385 01:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:41.385 01:08:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.385 01:08:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.385 01:08:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.385 01:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:41.385 01:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:41.385 01:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:41.642 01:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:15:41.642 01:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:41.642 01:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:41.642 01:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:41.642 01:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:41.642 01:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:41.642 01:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:41.642 01:08:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.642 01:08:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.642 01:08:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.642 01:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:41.642 01:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:42.206 00:15:42.206 01:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:42.206 01:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:42.206 01:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.463 01:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.463 01:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:42.463 01:08:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.463 01:08:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.463 01:08:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.463 01:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:42.463 { 00:15:42.463 "cntlid": 75, 00:15:42.463 "qid": 0, 00:15:42.463 "state": "enabled", 00:15:42.463 "thread": "nvmf_tgt_poll_group_000", 00:15:42.463 "listen_address": { 00:15:42.463 "trtype": "TCP", 00:15:42.463 "adrfam": "IPv4", 00:15:42.463 "traddr": "10.0.0.2", 00:15:42.463 "trsvcid": "4420" 00:15:42.463 }, 00:15:42.463 "peer_address": { 00:15:42.463 "trtype": "TCP", 00:15:42.463 "adrfam": "IPv4", 00:15:42.463 "traddr": "10.0.0.1", 00:15:42.463 "trsvcid": "42840" 00:15:42.463 }, 00:15:42.463 "auth": { 00:15:42.463 "state": "completed", 00:15:42.463 "digest": "sha384", 00:15:42.463 "dhgroup": "ffdhe4096" 00:15:42.463 } 00:15:42.463 } 00:15:42.463 ]' 00:15:42.463 01:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:42.463 01:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:42.463 01:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:42.463 01:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:42.463 01:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:42.463 01:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.463 01:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.463 01:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:42.720 01:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:ZDZlY2UwMmUyMDFjYThlZjVmNzQ3YTViMjNlNjNiYjI2oAMn: --dhchap-ctrl-secret DHHC-1:02:NDAxNTMxYTFiODFmNjdlZDhmNTMwZDUxZDVhNTFhOTNjYjczNmNiNGE2ODc5MDNj5nwdrQ==: 00:15:43.651 01:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.651 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.651 01:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:43.651 01:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.651 01:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.651 01:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.651 01:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:43.651 01:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:43.651 01:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:43.909 01:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:15:43.909 01:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:43.909 01:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:43.909 01:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:43.909 01:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:43.909 01:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:43.909 01:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:43.909 01:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.909 01:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.909 01:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.909 01:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:43.909 01:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:15:44.476 00:15:44.476 01:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:44.476 01:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:44.476 01:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.476 01:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.476 01:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:44.476 01:09:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.476 01:09:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.733 01:09:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.733 01:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:44.733 { 00:15:44.733 "cntlid": 77, 00:15:44.733 "qid": 0, 00:15:44.733 "state": "enabled", 00:15:44.733 "thread": "nvmf_tgt_poll_group_000", 00:15:44.733 "listen_address": { 00:15:44.733 "trtype": "TCP", 00:15:44.733 "adrfam": "IPv4", 00:15:44.733 "traddr": "10.0.0.2", 00:15:44.733 "trsvcid": "4420" 00:15:44.733 }, 00:15:44.733 "peer_address": { 00:15:44.733 "trtype": "TCP", 00:15:44.733 "adrfam": "IPv4", 00:15:44.733 "traddr": "10.0.0.1", 00:15:44.733 "trsvcid": "42860" 00:15:44.733 }, 00:15:44.733 "auth": { 00:15:44.733 "state": "completed", 00:15:44.733 "digest": "sha384", 00:15:44.733 "dhgroup": "ffdhe4096" 00:15:44.733 } 00:15:44.733 } 00:15:44.733 ]' 00:15:44.733 01:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:44.733 01:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:44.733 01:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:44.733 01:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:44.733 01:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:44.733 01:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:44.733 01:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:44.733 01:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:44.991 01:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZGJjMTg0NzFjMmQ0Y2RjOTUyZjBhOTg0MzEwYTA5ZTZiNzdkYTU3MTdhMmY5Njlkkzt+ZA==: --dhchap-ctrl-secret DHHC-1:01:ZDA5YjUxMzU1YTE1MzlkOTM5NzM3N2YwMjMwZjg5N2Lku1mG: 00:15:45.923 01:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:45.923 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:45.923 01:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:45.923 01:09:01 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.923 01:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.923 01:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.923 01:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:45.923 01:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:45.923 01:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:46.180 01:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:15:46.180 01:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:46.180 01:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:46.180 01:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:46.180 01:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:46.180 01:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:46.180 01:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:46.180 01:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.180 01:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.180 01:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.180 01:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:46.180 01:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:46.437 00:15:46.437 01:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:46.437 01:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:46.437 01:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:46.695 01:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:46.695 01:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:46.695 01:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.695 01:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.695 01:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.695 01:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:46.695 { 00:15:46.695 "cntlid": 79, 00:15:46.695 "qid": 
0, 00:15:46.695 "state": "enabled", 00:15:46.695 "thread": "nvmf_tgt_poll_group_000", 00:15:46.695 "listen_address": { 00:15:46.695 "trtype": "TCP", 00:15:46.695 "adrfam": "IPv4", 00:15:46.695 "traddr": "10.0.0.2", 00:15:46.695 "trsvcid": "4420" 00:15:46.695 }, 00:15:46.695 "peer_address": { 00:15:46.695 "trtype": "TCP", 00:15:46.695 "adrfam": "IPv4", 00:15:46.695 "traddr": "10.0.0.1", 00:15:46.695 "trsvcid": "42884" 00:15:46.695 }, 00:15:46.695 "auth": { 00:15:46.695 "state": "completed", 00:15:46.695 "digest": "sha384", 00:15:46.695 "dhgroup": "ffdhe4096" 00:15:46.695 } 00:15:46.695 } 00:15:46.695 ]' 00:15:46.695 01:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:46.695 01:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:46.695 01:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:46.695 01:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:46.695 01:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:46.980 01:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:46.980 01:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:46.980 01:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:46.980 01:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTRkODIxYzNmMjEyM2E4ODkyZmJjOGI3ODlmMWE2NzFmY2U2OTQ0MDAwOWU4MTlmNDgxMjcwZGY1MjFjY2RhOCZ2p/Y=: 00:15:47.908 01:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:47.908 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:47.909 01:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:47.909 01:09:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.909 01:09:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.909 01:09:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.909 01:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:47.909 01:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:47.909 01:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:47.909 01:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:48.165 01:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:15:48.165 01:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:48.165 01:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:48.165 01:09:04 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:48.165 01:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:48.165 01:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.165 01:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.165 01:09:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.165 01:09:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.165 01:09:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.165 01:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.165 01:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.728 00:15:48.728 01:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:48.728 01:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:48.728 01:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:48.984 01:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:48.984 01:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:48.984 01:09:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.984 01:09:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.984 01:09:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.984 01:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:48.984 { 00:15:48.984 "cntlid": 81, 00:15:48.984 "qid": 0, 00:15:48.984 "state": "enabled", 00:15:48.984 "thread": "nvmf_tgt_poll_group_000", 00:15:48.984 "listen_address": { 00:15:48.984 "trtype": "TCP", 00:15:48.984 "adrfam": "IPv4", 00:15:48.984 "traddr": "10.0.0.2", 00:15:48.984 "trsvcid": "4420" 00:15:48.984 }, 00:15:48.984 "peer_address": { 00:15:48.984 "trtype": "TCP", 00:15:48.984 "adrfam": "IPv4", 00:15:48.984 "traddr": "10.0.0.1", 00:15:48.984 "trsvcid": "42900" 00:15:48.984 }, 00:15:48.984 "auth": { 00:15:48.984 "state": "completed", 00:15:48.984 "digest": "sha384", 00:15:48.984 "dhgroup": "ffdhe6144" 00:15:48.984 } 00:15:48.984 } 00:15:48.984 ]' 00:15:48.984 01:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:48.984 01:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:48.984 01:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:48.984 01:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:48.984 01:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:49.240 01:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.240 01:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.240 01:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:49.240 01:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MWQ3MDA2MzRiZTNkMzdhZjNkMTI4Y2M4ZmMzZmFmZDIyYmUzZWU1MTE4ZmU4NzkyfVinMQ==: --dhchap-ctrl-secret DHHC-1:03:MDg2MTkwY2U5MjNhNDYwNTFhYWE0ZDNkZWJmOTg4OGQ0ZmZjZGU0Y2NjYjRmZmE2MzViZDE5NTAwNzQ5NzEwMR0d3Sw=: 00:15:50.168 01:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.168 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.168 01:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:50.168 01:09:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.168 01:09:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.168 01:09:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.168 01:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:50.168 01:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:50.168 01:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:50.425 01:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:15:50.425 01:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:50.425 01:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:50.425 01:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:50.425 01:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:50.425 01:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:50.426 01:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:50.426 01:09:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.426 01:09:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.426 01:09:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.426 01:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:50.426 01:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:50.989 00:15:50.989 01:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:50.989 01:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:50.989 01:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:51.247 01:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.247 01:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.247 01:09:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.247 01:09:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.247 01:09:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.247 01:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:51.247 { 00:15:51.247 "cntlid": 83, 00:15:51.247 "qid": 0, 00:15:51.247 "state": "enabled", 00:15:51.247 "thread": "nvmf_tgt_poll_group_000", 00:15:51.247 "listen_address": { 00:15:51.247 "trtype": "TCP", 00:15:51.247 "adrfam": "IPv4", 00:15:51.247 "traddr": "10.0.0.2", 00:15:51.247 "trsvcid": "4420" 00:15:51.247 }, 00:15:51.247 "peer_address": { 00:15:51.247 "trtype": "TCP", 00:15:51.247 "adrfam": "IPv4", 00:15:51.247 "traddr": "10.0.0.1", 00:15:51.247 "trsvcid": "60266" 00:15:51.247 }, 00:15:51.247 "auth": { 00:15:51.247 "state": "completed", 00:15:51.247 "digest": "sha384", 00:15:51.247 "dhgroup": "ffdhe6144" 00:15:51.247 } 00:15:51.247 } 00:15:51.247 ]' 00:15:51.247 01:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:51.247 01:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:51.247 01:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:51.248 01:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:51.248 01:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:51.505 01:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:51.505 01:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:51.505 01:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.762 01:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:ZDZlY2UwMmUyMDFjYThlZjVmNzQ3YTViMjNlNjNiYjI2oAMn: --dhchap-ctrl-secret 
DHHC-1:02:NDAxNTMxYTFiODFmNjdlZDhmNTMwZDUxZDVhNTFhOTNjYjczNmNiNGE2ODc5MDNj5nwdrQ==: 00:15:52.721 01:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:52.721 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:52.721 01:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:52.721 01:09:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.721 01:09:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.721 01:09:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.721 01:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:52.721 01:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:52.721 01:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:52.721 01:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:15:52.722 01:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:52.722 01:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:52.722 01:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:52.722 01:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:52.722 01:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:52.722 01:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.722 01:09:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.722 01:09:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.722 01:09:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.722 01:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.722 01:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.286 00:15:53.286 01:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:53.286 01:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.286 01:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:53.544 01:09:09 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.544 01:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.544 01:09:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.544 01:09:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.544 01:09:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.544 01:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:53.544 { 00:15:53.544 "cntlid": 85, 00:15:53.544 "qid": 0, 00:15:53.544 "state": "enabled", 00:15:53.544 "thread": "nvmf_tgt_poll_group_000", 00:15:53.544 "listen_address": { 00:15:53.544 "trtype": "TCP", 00:15:53.544 "adrfam": "IPv4", 00:15:53.544 "traddr": "10.0.0.2", 00:15:53.544 "trsvcid": "4420" 00:15:53.544 }, 00:15:53.544 "peer_address": { 00:15:53.544 "trtype": "TCP", 00:15:53.544 "adrfam": "IPv4", 00:15:53.544 "traddr": "10.0.0.1", 00:15:53.544 "trsvcid": "60298" 00:15:53.544 }, 00:15:53.544 "auth": { 00:15:53.544 "state": "completed", 00:15:53.544 "digest": "sha384", 00:15:53.544 "dhgroup": "ffdhe6144" 00:15:53.544 } 00:15:53.544 } 00:15:53.544 ]' 00:15:53.544 01:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:53.544 01:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:53.544 01:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:53.544 01:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:53.544 01:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:53.544 01:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:53.544 01:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:53.544 01:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:53.801 01:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZGJjMTg0NzFjMmQ0Y2RjOTUyZjBhOTg0MzEwYTA5ZTZiNzdkYTU3MTdhMmY5Njlkkzt+ZA==: --dhchap-ctrl-secret DHHC-1:01:ZDA5YjUxMzU1YTE1MzlkOTM5NzM3N2YwMjMwZjg5N2Lku1mG: 00:15:54.733 01:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:54.733 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:54.733 01:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:54.733 01:09:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.733 01:09:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.733 01:09:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.733 01:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:54.733 01:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
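Everything in this trace is one fixed verification cycle replayed for each digest/DH-group/key combination. As a condensed, runnable sketch of the connect_authenticate steps traced above (variable names here are illustrative stand-ins, not auth.sh's own; it assumes the same running SPDK target and /var/tmp/host.sock host instance as this job):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  hostsock=/var/tmp/host.sock
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
  digest=sha384 dhgroup=ffdhe6144 keyid=3

  # auth.sh only appends --dhchap-ctrlr-key "ckey$keyid" when a controller
  # key is defined for this key id; key3 in this run has none.
  ckey=()

  # 1. Pin the host-side bdev_nvme module to one digest/DH-group pair.
  "$rpc" -s "$hostsock" bdev_nvme_set_options \
      --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

  # 2. Register the host on the target subsystem with the key under test
  #    (rpc.py without -s talks to the target's default socket, as the
  #    rpc_cmd calls in this trace do).
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key "key$keyid" "${ckey[@]}"

  # 3. Attach a controller through the authenticating initiator.
  "$rpc" -s "$hostsock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
      --dhchap-key "key$keyid" "${ckey[@]}"

  # 4. Assert on the target that the queue pair finished DH-HMAC-CHAP with
  #    the expected parameters (the same jq filters used throughout this log).
  qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed  ]]
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"  ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]

  # 5. Tear down before the next combination.
  "$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0

Each pass differs only in that digest/dhgroup/keyid trio, which is why the log repeats near-verbatim with only the cntlid, the ephemeral peer port, and the key ids advancing.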
00:15:54.733 01:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:54.992 01:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:15:54.992 01:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:54.992 01:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:54.992 01:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:54.992 01:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:54.992 01:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:54.992 01:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:54.992 01:09:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.992 01:09:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.992 01:09:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.992 01:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:54.992 01:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:55.556 00:15:55.556 01:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:55.556 01:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.556 01:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:55.813 01:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.813 01:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.813 01:09:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.813 01:09:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.813 01:09:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.813 01:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:55.813 { 00:15:55.813 "cntlid": 87, 00:15:55.813 "qid": 0, 00:15:55.813 "state": "enabled", 00:15:55.813 "thread": "nvmf_tgt_poll_group_000", 00:15:55.813 "listen_address": { 00:15:55.813 "trtype": "TCP", 00:15:55.813 "adrfam": "IPv4", 00:15:55.813 "traddr": "10.0.0.2", 00:15:55.813 "trsvcid": "4420" 00:15:55.813 }, 00:15:55.813 "peer_address": { 00:15:55.813 "trtype": "TCP", 00:15:55.813 "adrfam": "IPv4", 00:15:55.813 "traddr": "10.0.0.1", 00:15:55.813 "trsvcid": "60326" 00:15:55.813 }, 00:15:55.813 "auth": { 00:15:55.813 "state": "completed", 
00:15:55.813 "digest": "sha384", 00:15:55.813 "dhgroup": "ffdhe6144" 00:15:55.813 } 00:15:55.813 } 00:15:55.813 ]' 00:15:55.813 01:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:55.813 01:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:55.814 01:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:55.814 01:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:55.814 01:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:55.814 01:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:55.814 01:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.814 01:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.071 01:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTRkODIxYzNmMjEyM2E4ODkyZmJjOGI3ODlmMWE2NzFmY2U2OTQ0MDAwOWU4MTlmNDgxMjcwZGY1MjFjY2RhOCZ2p/Y=: 00:15:57.004 01:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:57.004 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:57.004 01:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:57.004 01:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.004 01:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.004 01:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.004 01:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:57.004 01:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:57.004 01:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:57.004 01:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:57.262 01:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:15:57.262 01:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:57.262 01:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:57.262 01:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:57.262 01:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:57.262 01:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:57.262 01:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:15:57.262 01:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.262 01:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.262 01:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.262 01:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.262 01:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.192 00:15:58.192 01:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:58.192 01:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:58.192 01:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:58.450 01:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.450 01:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:58.450 01:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.450 01:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.450 01:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.450 01:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:58.450 { 00:15:58.450 "cntlid": 89, 00:15:58.450 "qid": 0, 00:15:58.450 "state": "enabled", 00:15:58.450 "thread": "nvmf_tgt_poll_group_000", 00:15:58.450 "listen_address": { 00:15:58.450 "trtype": "TCP", 00:15:58.450 "adrfam": "IPv4", 00:15:58.450 "traddr": "10.0.0.2", 00:15:58.450 "trsvcid": "4420" 00:15:58.450 }, 00:15:58.450 "peer_address": { 00:15:58.450 "trtype": "TCP", 00:15:58.450 "adrfam": "IPv4", 00:15:58.450 "traddr": "10.0.0.1", 00:15:58.450 "trsvcid": "60346" 00:15:58.450 }, 00:15:58.450 "auth": { 00:15:58.450 "state": "completed", 00:15:58.450 "digest": "sha384", 00:15:58.450 "dhgroup": "ffdhe8192" 00:15:58.450 } 00:15:58.450 } 00:15:58.450 ]' 00:15:58.450 01:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:58.450 01:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:58.451 01:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:58.451 01:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:58.451 01:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:58.451 01:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:58.451 01:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.451 01:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:58.709 01:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MWQ3MDA2MzRiZTNkMzdhZjNkMTI4Y2M4ZmMzZmFmZDIyYmUzZWU1MTE4ZmU4NzkyfVinMQ==: --dhchap-ctrl-secret DHHC-1:03:MDg2MTkwY2U5MjNhNDYwNTFhYWE0ZDNkZWJmOTg4OGQ0ZmZjZGU0Y2NjYjRmZmE2MzViZDE5NTAwNzQ5NzEwMR0d3Sw=: 00:15:59.641 01:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:59.641 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:59.641 01:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:59.641 01:09:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.641 01:09:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.641 01:09:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.641 01:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:59.641 01:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:59.641 01:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:59.899 01:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:15:59.899 01:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:59.899 01:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:59.899 01:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:59.899 01:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:59.899 01:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:59.899 01:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:59.899 01:09:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.899 01:09:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.899 01:09:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.899 01:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:59.900 01:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
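The second half of every cycle replays the handshake through the kernel initiator: nvme-cli connects with the same keys in DH-HMAC-CHAP interchange format, then disconnects, and the host is deregistered from the subsystem. In a DHHC-1:xx:<base64>: secret, the two-digit field identifies the transformation applied to the key (00 unhashed, 01 SHA-256, 02 SHA-384, 03 SHA-512) and the base64 payload is the key material with a CRC-32 check appended, which is why key0 through key3 in this run carry the prefixes DHHC-1:00: through DHHC-1:03:. A minimal sketch of that host-side leg, with the secrets elided rather than copied from the log:

  # Bidirectional authentication: --dhchap-secret is the host key,
  # --dhchap-ctrl-secret the controller key; -i 1 requests one I/O queue.
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
      --hostid 29f67375-a902-e411-ace9-001e67bc3c9a \
      --dhchap-secret 'DHHC-1:01:<host key, elided>' \
      --dhchap-ctrl-secret 'DHHC-1:02:<controller key, elided>'

  # A successful run then reports, as echoed throughout this trace:
  #   NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0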
00:16:00.834 00:16:00.834 01:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:00.834 01:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:00.834 01:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:00.834 01:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.834 01:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:00.834 01:09:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.834 01:09:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.092 01:09:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.092 01:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:01.092 { 00:16:01.092 "cntlid": 91, 00:16:01.092 "qid": 0, 00:16:01.092 "state": "enabled", 00:16:01.092 "thread": "nvmf_tgt_poll_group_000", 00:16:01.092 "listen_address": { 00:16:01.092 "trtype": "TCP", 00:16:01.092 "adrfam": "IPv4", 00:16:01.092 "traddr": "10.0.0.2", 00:16:01.092 "trsvcid": "4420" 00:16:01.092 }, 00:16:01.092 "peer_address": { 00:16:01.092 "trtype": "TCP", 00:16:01.092 "adrfam": "IPv4", 00:16:01.092 "traddr": "10.0.0.1", 00:16:01.092 "trsvcid": "38166" 00:16:01.092 }, 00:16:01.092 "auth": { 00:16:01.092 "state": "completed", 00:16:01.092 "digest": "sha384", 00:16:01.092 "dhgroup": "ffdhe8192" 00:16:01.092 } 00:16:01.092 } 00:16:01.092 ]' 00:16:01.092 01:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:01.092 01:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:01.092 01:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:01.092 01:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:01.092 01:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:01.092 01:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.092 01:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.092 01:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:01.349 01:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:ZDZlY2UwMmUyMDFjYThlZjVmNzQ3YTViMjNlNjNiYjI2oAMn: --dhchap-ctrl-secret DHHC-1:02:NDAxNTMxYTFiODFmNjdlZDhmNTMwZDUxZDVhNTFhOTNjYjczNmNiNGE2ODc5MDNj5nwdrQ==: 00:16:02.282 01:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.282 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:02.282 01:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:02.282 01:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:16:02.282 01:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.282 01:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.282 01:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:02.283 01:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:02.283 01:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:02.540 01:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:16:02.540 01:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:02.540 01:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:02.540 01:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:02.540 01:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:02.541 01:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:02.541 01:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.541 01:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.541 01:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.541 01:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.541 01:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.541 01:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:03.474 00:16:03.474 01:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:03.474 01:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:03.474 01:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.732 01:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.732 01:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.732 01:09:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.732 01:09:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.732 01:09:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.732 01:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:03.732 { 
00:16:03.732 "cntlid": 93, 00:16:03.732 "qid": 0, 00:16:03.732 "state": "enabled", 00:16:03.732 "thread": "nvmf_tgt_poll_group_000", 00:16:03.732 "listen_address": { 00:16:03.732 "trtype": "TCP", 00:16:03.732 "adrfam": "IPv4", 00:16:03.732 "traddr": "10.0.0.2", 00:16:03.732 "trsvcid": "4420" 00:16:03.732 }, 00:16:03.732 "peer_address": { 00:16:03.732 "trtype": "TCP", 00:16:03.732 "adrfam": "IPv4", 00:16:03.732 "traddr": "10.0.0.1", 00:16:03.732 "trsvcid": "38190" 00:16:03.732 }, 00:16:03.732 "auth": { 00:16:03.732 "state": "completed", 00:16:03.732 "digest": "sha384", 00:16:03.732 "dhgroup": "ffdhe8192" 00:16:03.732 } 00:16:03.732 } 00:16:03.732 ]' 00:16:03.732 01:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:03.732 01:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:03.732 01:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:03.732 01:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:03.732 01:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:03.732 01:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.732 01:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.732 01:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:03.990 01:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZGJjMTg0NzFjMmQ0Y2RjOTUyZjBhOTg0MzEwYTA5ZTZiNzdkYTU3MTdhMmY5Njlkkzt+ZA==: --dhchap-ctrl-secret DHHC-1:01:ZDA5YjUxMzU1YTE1MzlkOTM5NzM3N2YwMjMwZjg5N2Lku1mG: 00:16:04.921 01:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.921 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.921 01:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:04.921 01:09:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.921 01:09:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.921 01:09:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.921 01:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:04.921 01:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:04.921 01:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:05.177 01:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:16:05.177 01:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:05.177 01:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:05.177 01:09:21 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:05.177 01:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:05.178 01:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:05.178 01:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:05.178 01:09:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.178 01:09:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.178 01:09:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.178 01:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:05.178 01:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:06.108 00:16:06.108 01:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:06.108 01:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:06.108 01:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.365 01:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.365 01:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:06.365 01:09:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.365 01:09:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.365 01:09:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.365 01:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:06.365 { 00:16:06.365 "cntlid": 95, 00:16:06.365 "qid": 0, 00:16:06.365 "state": "enabled", 00:16:06.365 "thread": "nvmf_tgt_poll_group_000", 00:16:06.365 "listen_address": { 00:16:06.365 "trtype": "TCP", 00:16:06.365 "adrfam": "IPv4", 00:16:06.365 "traddr": "10.0.0.2", 00:16:06.365 "trsvcid": "4420" 00:16:06.365 }, 00:16:06.365 "peer_address": { 00:16:06.365 "trtype": "TCP", 00:16:06.365 "adrfam": "IPv4", 00:16:06.365 "traddr": "10.0.0.1", 00:16:06.365 "trsvcid": "38218" 00:16:06.365 }, 00:16:06.365 "auth": { 00:16:06.365 "state": "completed", 00:16:06.365 "digest": "sha384", 00:16:06.365 "dhgroup": "ffdhe8192" 00:16:06.365 } 00:16:06.365 } 00:16:06.365 ]' 00:16:06.365 01:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:06.365 01:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:06.365 01:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:06.365 01:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:06.365 01:09:22 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:06.365 01:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:06.365 01:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:06.365 01:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:06.621 01:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTRkODIxYzNmMjEyM2E4ODkyZmJjOGI3ODlmMWE2NzFmY2U2OTQ0MDAwOWU4MTlmNDgxMjcwZGY1MjFjY2RhOCZ2p/Y=:
00:16:07.550 01:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:07.550 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:07.550 01:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:16:07.550 01:09:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:07.550 01:09:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:07.550 01:09:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:07.550 01:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}"
00:16:07.550 01:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:16:07.550 01:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:07.550 01:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:16:07.550 01:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:16:07.807 01:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0
00:16:07.807 01:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:07.807 01:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:07.807 01:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:16:07.807 01:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:16:07.807 01:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:07.807 01:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:07.807 01:09:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:07.807 01:09:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:07.807 01:09:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:07.807 01:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:07.807 01:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:08.063 
00:16:08.063 01:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:08.063 01:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:08.063 01:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:08.332 01:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:08.332 01:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:08.332 01:09:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:08.332 01:09:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:08.332 01:09:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:08.332 01:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:08.332 {
00:16:08.332 "cntlid": 97,
00:16:08.332 "qid": 0,
00:16:08.332 "state": "enabled",
00:16:08.332 "thread": "nvmf_tgt_poll_group_000",
00:16:08.332 "listen_address": {
00:16:08.332 "trtype": "TCP",
00:16:08.332 "adrfam": "IPv4",
00:16:08.332 "traddr": "10.0.0.2",
00:16:08.332 "trsvcid": "4420"
00:16:08.332 },
00:16:08.332 "peer_address": {
00:16:08.332 "trtype": "TCP",
00:16:08.332 "adrfam": "IPv4",
00:16:08.332 "traddr": "10.0.0.1",
00:16:08.332 "trsvcid": "38240"
00:16:08.332 },
00:16:08.332 "auth": {
00:16:08.332 "state": "completed",
00:16:08.333 "digest": "sha512",
00:16:08.333 "dhgroup": "null"
00:16:08.333 }
00:16:08.333 }
00:16:08.333 ]'
00:16:08.333 01:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:08.333 01:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:08.333 01:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:08.333 01:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:16:08.612 01:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:08.612 01:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:08.612 01:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:08.612 01:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:08.869 01:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MWQ3MDA2MzRiZTNkMzdhZjNkMTI4Y2M4ZmMzZmFmZDIyYmUzZWU1MTE4ZmU4NzkyfVinMQ==: --dhchap-ctrl-secret DHHC-1:03:MDg2MTkwY2U5MjNhNDYwNTFhYWE0ZDNkZWJmOTg4OGQ0ZmZjZGU0Y2NjYjRmZmE2MzViZDE5NTAwNzQ5NzEwMR0d3Sw=:
00:16:09.802 01:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:09.802 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:09.802 01:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:16:09.802 01:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:09.802 01:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:09.802 01:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
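The block above is one complete connect_authenticate iteration (digest sha512, dhgroup null, key 0). Condensed into plain shell it is roughly the sketch below; rpc_cmd drives the nvmf target while hostrpc (rpc.py -s /var/tmp/host.sock) drives the host application, key0/ckey0 are key names registered earlier in the run, and $hostnqn / the two secret variables are placeholders for the UUID-based NQN and DHHC-1 blobs shown in the trace, so treat this as an illustrative sketch rather than a verbatim excerpt of target/auth.sh:

  hostrpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"
  # host side: allow exactly one digest and one dhgroup for this pass
  $hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
  # target side: authorize the host NQN with a key (plus optional controller key)
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # authenticate a user-space controller, then check that the qpair reports auth.state == "completed"
  $hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
  rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'
  $hostrpc bdev_nvme_detach_controller nvme0
  # repeat the handshake with the kernel initiator, then tear everything down
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$hostnqn" --hostid "$hostid" --dhchap-secret "$key0_secret" --dhchap-ctrl-secret "$ckey0_secret"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"

The same round trip now repeats for keys 1-3: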
00:16:09.802 01:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:09.802 01:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:16:09.802 01:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:16:09.802 01:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1
00:16:09.802 01:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:09.802 01:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:09.802 01:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:16:09.802 01:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:16:09.802 01:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:09.803 01:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:09.803 01:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:09.803 01:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:09.803 01:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:09.803 01:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:09.803 01:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:10.060 
00:16:10.060 01:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:10.060 01:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:10.060 01:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:10.318 01:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:10.318 01:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:10.318 01:09:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:10.318 01:09:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:10.318 01:09:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:10.318 01:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:10.318 {
00:16:10.318 "cntlid": 99,
00:16:10.318 "qid": 0,
00:16:10.318 "state": "enabled",
00:16:10.318 "thread": "nvmf_tgt_poll_group_000",
00:16:10.318 "listen_address": {
00:16:10.318 "trtype": "TCP",
00:16:10.318 "adrfam": "IPv4",
00:16:10.318 "traddr": "10.0.0.2",
00:16:10.318 "trsvcid": "4420"
00:16:10.318 },
00:16:10.318 "peer_address": {
00:16:10.318 "trtype": "TCP",
00:16:10.318 "adrfam": "IPv4",
00:16:10.318 "traddr": "10.0.0.1",
00:16:10.318 "trsvcid": "46110"
00:16:10.318 },
00:16:10.318 "auth": {
00:16:10.318 "state": "completed",
00:16:10.318 "digest": "sha512",
00:16:10.318 "dhgroup": "null"
00:16:10.318 }
00:16:10.318 }
00:16:10.318 ]'
00:16:10.575 01:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:10.575 01:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:10.576 01:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:10.576 01:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:16:10.576 01:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:10.576 01:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:10.576 01:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:10.576 01:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:10.833 01:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:ZDZlY2UwMmUyMDFjYThlZjVmNzQ3YTViMjNlNjNiYjI2oAMn: --dhchap-ctrl-secret DHHC-1:02:NDAxNTMxYTFiODFmNjdlZDhmNTMwZDUxZDVhNTFhOTNjYjczNmNiNGE2ODc5MDNj5nwdrQ==:
00:16:11.767 01:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:11.767 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:11.767 01:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:16:11.767 01:09:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:11.767 01:09:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:11.767 01:09:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:11.767 01:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:11.767 01:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:16:11.767 01:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:16:12.025 01:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2
00:16:12.025 01:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:12.025 01:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:12.025 01:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:16:12.025 01:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:16:12.025 01:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:12.025 01:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:12.025 01:09:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:12.025 01:09:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:12.025 01:09:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:12.025 01:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:12.025 01:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:12.283 
00:16:12.283 01:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:12.283 01:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:12.283 01:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:12.541 01:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:12.541 01:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:12.541 01:09:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:12.541 01:09:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:12.541 01:09:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:12.541 01:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:12.541 {
00:16:12.541 "cntlid": 101,
00:16:12.541 "qid": 0,
00:16:12.541 "state": "enabled",
00:16:12.541 "thread": "nvmf_tgt_poll_group_000",
00:16:12.541 "listen_address": {
00:16:12.541 "trtype": "TCP",
00:16:12.541 "adrfam": "IPv4",
00:16:12.541 "traddr": "10.0.0.2",
00:16:12.541 "trsvcid": "4420"
00:16:12.541 },
00:16:12.541 "peer_address": {
00:16:12.541 "trtype": "TCP",
00:16:12.541 "adrfam": "IPv4",
00:16:12.541 "traddr": "10.0.0.1",
00:16:12.541 "trsvcid": "46142"
00:16:12.541 },
00:16:12.541 "auth": {
00:16:12.541 "state": "completed",
00:16:12.541 "digest": "sha512",
00:16:12.541 "dhgroup": "null"
00:16:12.541 }
00:16:12.541 }
00:16:12.541 ]'
00:16:12.541 01:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:12.541 01:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:12.541 01:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:12.541 01:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:16:12.541 01:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:12.541 01:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:12.541 01:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:12.541 01:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:12.799 01:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZGJjMTg0NzFjMmQ0Y2RjOTUyZjBhOTg0MzEwYTA5ZTZiNzdkYTU3MTdhMmY5Njlkkzt+ZA==: --dhchap-ctrl-secret DHHC-1:01:ZDA5YjUxMzU1YTE1MzlkOTM5NzM3N2YwMjMwZjg5N2Lku1mG:
00:16:13.734 01:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:13.734 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:13.734 01:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:16:13.734 01:09:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:13.734 01:09:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:13.734 01:09:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:13.734 01:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:13.734 01:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:16:13.734 01:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:16:13.992 01:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3
00:16:13.992 01:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:13.992 01:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:13.992 01:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:16:13.992 01:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:16:13.992 01:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:13.992 01:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3
00:16:13.992 01:09:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:13.992 01:09:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:13.992 01:09:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:13.992 01:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:13.992 01:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:14.558 
00:16:14.558 01:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:14.558 01:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:14.558 01:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:14.558 01:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:14.558 01:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:14.558 01:09:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:14.558 01:09:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:14.815 01:09:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:14.815 01:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:14.815 {
00:16:14.815 "cntlid": 103,
00:16:14.815 "qid": 0,
00:16:14.815 "state": "enabled",
00:16:14.815 "thread": "nvmf_tgt_poll_group_000",
00:16:14.815 "listen_address": {
00:16:14.815 "trtype": "TCP",
00:16:14.815 "adrfam": "IPv4",
00:16:14.815 "traddr": "10.0.0.2",
00:16:14.816 "trsvcid": "4420"
00:16:14.816 },
00:16:14.816 "peer_address": {
00:16:14.816 "trtype": "TCP",
00:16:14.816 "adrfam": "IPv4",
00:16:14.816 "traddr": "10.0.0.1",
00:16:14.816 "trsvcid": "46176"
00:16:14.816 },
00:16:14.816 "auth": {
00:16:14.816 "state": "completed",
00:16:14.816 "digest": "sha512",
00:16:14.816 "dhgroup": "null"
00:16:14.816 }
00:16:14.816 }
00:16:14.816 ]'
00:16:14.816 01:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:14.816 01:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:14.816 01:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:14.816 01:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:16:14.816 01:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:14.816 01:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:14.816 01:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:15.074 01:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:15.074 01:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTRkODIxYzNmMjEyM2E4ODkyZmJjOGI3ODlmMWE2NzFmY2U2OTQ0MDAwOWU4MTlmNDgxMjcwZGY1MjFjY2RhOCZ2p/Y=:
00:16:16.006 01:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:16.006 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:16.006 01:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:16:16.006 01:09:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:16.006 01:09:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:16.006 01:09:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
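With the null-dhgroup pass finished, the middle loop now advances to ffdhe2048 and replays the same four keys. The entries tagged target/auth.sh@91-94 above come from a nest that is, in outline (a paraphrase of the loop structure the xtrace shows, not a verbatim quote of auth.sh):

  for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
        # line 94: re-arm the host with exactly one digest and one dhgroup
        hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        # line 96: one attach/connect/verify/teardown round trip
        connect_authenticate "$digest" "$dhgroup" "$keyid"
      done
    done
  done

Since autotest scripts run with errexit semantics, reaching the next iteration implies the previous handshake succeeded: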
00:16:16.006 01:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:16:16.006 01:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:16.006 01:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:16:16.006 01:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:16:16.264 01:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0
00:16:16.264 01:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:16.264 01:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:16.264 01:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:16:16.264 01:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:16:16.264 01:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:16.264 01:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:16.264 01:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:16.264 01:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:16.264 01:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:16.264 01:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:16.264 01:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:16.522 
00:16:16.522 01:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:16.522 01:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:16.522 01:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:16.780 01:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:16.780 01:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:16.780 01:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:16.780 01:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:16.780 01:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:16.780 01:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:16.780 {
00:16:16.780 "cntlid": 105,
00:16:16.780 "qid": 0,
00:16:16.780 "state": "enabled",
00:16:16.780 "thread": "nvmf_tgt_poll_group_000",
00:16:16.780 "listen_address": {
00:16:16.780 "trtype": "TCP",
00:16:16.780 "adrfam": "IPv4",
00:16:16.780 "traddr": "10.0.0.2",
00:16:16.780 "trsvcid": "4420"
00:16:16.780 },
00:16:16.780 "peer_address": {
00:16:16.780 "trtype": "TCP",
00:16:16.780 "adrfam": "IPv4",
00:16:16.780 "traddr": "10.0.0.1",
00:16:16.780 "trsvcid": "46202"
00:16:16.780 },
00:16:16.780 "auth": {
00:16:16.780 "state": "completed",
00:16:16.780 "digest": "sha512",
00:16:16.780 "dhgroup": "ffdhe2048"
00:16:16.780 }
00:16:16.780 }
00:16:16.780 ]'
00:16:17.038 01:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:17.038 01:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:17.038 01:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:17.038 01:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:16:17.038 01:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:17.038 01:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:17.038 01:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:17.038 01:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:17.296 01:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MWQ3MDA2MzRiZTNkMzdhZjNkMTI4Y2M4ZmMzZmFmZDIyYmUzZWU1MTE4ZmU4NzkyfVinMQ==: --dhchap-ctrl-secret DHHC-1:03:MDg2MTkwY2U5MjNhNDYwNTFhYWE0ZDNkZWJmOTg4OGQ0ZmZjZGU0Y2NjYjRmZmE2MzViZDE5NTAwNzQ5NzEwMR0d3Sw=:
00:16:18.228 01:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:18.228 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:18.228 01:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:16:18.228 01:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:18.228 01:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:18.228 01:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:18.228 01:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:18.228 01:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:16:18.228 01:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:16:18.486 01:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1
00:16:18.486 01:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:18.486 01:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:18.486 01:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:16:18.486 01:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:16:18.486 01:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:18.486 01:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:18.486 01:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:18.486 01:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:18.486 01:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:18.486 01:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:18.486 01:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:18.744 
00:16:18.744 01:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:18.744 01:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:18.744 01:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:19.001 01:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:19.001 01:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:19.001 01:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:19.001 01:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:19.001 01:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:19.001 01:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:19.001 {
00:16:19.001 "cntlid": 107,
00:16:19.001 "qid": 0,
00:16:19.001 "state": "enabled",
00:16:19.001 "thread": "nvmf_tgt_poll_group_000",
00:16:19.001 "listen_address": {
00:16:19.001 "trtype": "TCP",
00:16:19.001 "adrfam": "IPv4",
00:16:19.001 "traddr": "10.0.0.2",
00:16:19.001 "trsvcid": "4420"
00:16:19.001 },
00:16:19.001 "peer_address": {
00:16:19.001 "trtype": "TCP",
00:16:19.001 "adrfam": "IPv4",
00:16:19.001 "traddr": "10.0.0.1",
00:16:19.001 "trsvcid": "46240"
00:16:19.001 },
00:16:19.001 "auth": {
00:16:19.001 "state": "completed",
00:16:19.001 "digest": "sha512",
00:16:19.001 "dhgroup": "ffdhe2048"
00:16:19.001 }
00:16:19.001 }
00:16:19.001 ]'
00:16:19.001 01:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:19.001 01:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:19.257 01:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:19.257 01:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:16:19.257 01:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:19.257 01:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:19.257 01:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:19.257 01:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:19.514 01:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:ZDZlY2UwMmUyMDFjYThlZjVmNzQ3YTViMjNlNjNiYjI2oAMn: --dhchap-ctrl-secret DHHC-1:02:NDAxNTMxYTFiODFmNjdlZDhmNTMwZDUxZDVhNTFhOTNjYjczNmNiNGE2ODc5MDNj5nwdrQ==:
00:16:20.447 01:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:20.447 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:20.447 01:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:16:20.447 01:09:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:20.447 01:09:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:20.447 01:09:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:20.447 01:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:20.447 01:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:16:20.447 01:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:16:20.704 01:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2
00:16:20.704 01:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:20.704 01:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:20.704 01:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:16:20.704 01:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:16:20.704 01:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:20.704 01:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:20.704 01:09:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:20.704 01:09:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:20.704 01:09:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:20.704 01:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:20.704 01:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:20.962 
00:16:20.962 01:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:20.962 01:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:20.962 01:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:21.219 01:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:21.219 01:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:21.219 01:09:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:21.219 01:09:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:21.219 01:09:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:21.219 01:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:21.219 {
00:16:21.219 "cntlid": 109,
00:16:21.219 "qid": 0,
00:16:21.219 "state": "enabled",
00:16:21.219 "thread": "nvmf_tgt_poll_group_000",
00:16:21.219 "listen_address": {
00:16:21.219 "trtype": "TCP",
00:16:21.219 "adrfam": "IPv4",
00:16:21.219 "traddr": "10.0.0.2",
00:16:21.219 "trsvcid": "4420"
00:16:21.219 },
00:16:21.219 "peer_address": {
00:16:21.219 "trtype": "TCP",
00:16:21.219 "adrfam": "IPv4",
00:16:21.219 "traddr": "10.0.0.1",
00:16:21.219 "trsvcid": "46364"
00:16:21.219 },
00:16:21.219 "auth": {
00:16:21.219 "state": "completed",
00:16:21.219 "digest": "sha512",
00:16:21.219 "dhgroup": "ffdhe2048"
00:16:21.219 }
00:16:21.219 }
00:16:21.219 ]'
00:16:21.219 01:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:21.219 01:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:21.219 01:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:21.219 01:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:16:21.219 01:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:21.219 01:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:21.219 01:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:21.477 01:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:21.477 01:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZGJjMTg0NzFjMmQ0Y2RjOTUyZjBhOTg0MzEwYTA5ZTZiNzdkYTU3MTdhMmY5Njlkkzt+ZA==: --dhchap-ctrl-secret DHHC-1:01:ZDA5YjUxMzU1YTE1MzlkOTM5NzM3N2YwMjMwZjg5N2Lku1mG:
00:16:22.431 01:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:22.431 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:22.431 01:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:16:22.431 01:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:22.431 01:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:22.431 01:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:22.431 01:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:22.431 01:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:16:22.431 01:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:16:22.688 01:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3
00:16:22.688 01:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:22.688 01:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:22.688 01:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:16:22.688 01:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:16:22.688 01:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:22.688 01:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3
00:16:22.688 01:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:22.688 01:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:22.688 01:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:22.688 01:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:22.688 01:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:22.946 
00:16:22.946 01:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:22.946 01:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:22.946 01:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:23.202 01:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:23.202 01:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:23.202 01:09:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:23.202 01:09:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:23.202 01:09:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:23.202 01:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:23.202 {
00:16:23.202 "cntlid": 111,
00:16:23.202 "qid": 0,
00:16:23.202 "state": "enabled",
00:16:23.202 "thread": "nvmf_tgt_poll_group_000",
00:16:23.202 "listen_address": {
00:16:23.202 "trtype": "TCP",
00:16:23.202 "adrfam": "IPv4",
00:16:23.202 "traddr": "10.0.0.2",
00:16:23.202 "trsvcid": "4420"
00:16:23.203 },
00:16:23.203 "peer_address": {
00:16:23.203 "trtype": "TCP",
00:16:23.203 "adrfam": "IPv4",
00:16:23.203 "traddr": "10.0.0.1",
00:16:23.203 "trsvcid": "46392"
00:16:23.203 },
00:16:23.203 "auth": {
00:16:23.203 "state": "completed",
00:16:23.203 "digest": "sha512",
00:16:23.203 "dhgroup": "ffdhe2048"
00:16:23.203 }
00:16:23.203 }
00:16:23.203 ]'
00:16:23.459 01:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:23.459 01:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:23.459 01:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:23.459 01:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:16:23.459 01:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:23.459 01:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:23.459 01:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:23.716 01:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:23.716 01:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTRkODIxYzNmMjEyM2E4ODkyZmJjOGI3ODlmMWE2NzFmY2U2OTQ0MDAwOWU4MTlmNDgxMjcwZGY1MjFjY2RhOCZ2p/Y=:
00:16:24.729 01:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:24.729 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:24.729 01:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:16:24.729 01:09:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:24.729 01:09:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:24.729 01:09:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
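Note that every key3 pass, like the one just above, sends --dhchap-key key3 with no controller key and connects with no --dhchap-ctrl-secret: key 3 has no controller secret configured, so it exercises unidirectional authentication. The target/auth.sh@37 entry shows the mechanism, bash's :+ expansion, which a standalone experiment (not part of the suite) reproduces:

  # ${ckeys[$keyid]:+...} expands to the flag pair only when ckeys[keyid] is non-empty
  ckeys=("sec0" "sec1" "sec2" "")   # key 3 deliberately has no ctrlr secret
  keyid=3
  ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
  echo "${#ckey[@]}"                # prints 0 for key 3, 2 for keys 0-2

The loop now moves on to the ffdhe3072 dhgroup and starts over at key 0: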
00:16:24.729 01:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:16:24.729 01:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:24.729 01:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:16:24.729 01:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:16:24.729 01:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0
00:16:24.729 01:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:24.729 01:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:24.729 01:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:16:24.729 01:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:16:24.729 01:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:24.729 01:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:24.729 01:09:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:24.729 01:09:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:24.729 01:09:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:24.729 01:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:24.729 01:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:25.295 
00:16:25.295 01:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:25.295 01:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:25.295 01:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:25.295 01:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:25.295 01:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:25.295 01:09:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:25.295 01:09:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:25.552 01:09:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:25.552 01:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:25.552 {
00:16:25.552 "cntlid": 113,
00:16:25.552 "qid": 0,
00:16:25.552 "state": "enabled",
00:16:25.552 "thread": "nvmf_tgt_poll_group_000",
00:16:25.552 "listen_address": {
00:16:25.552 "trtype": "TCP",
00:16:25.552 "adrfam": "IPv4",
00:16:25.552 "traddr": "10.0.0.2",
00:16:25.552 "trsvcid": "4420"
00:16:25.552 },
00:16:25.552 "peer_address": {
00:16:25.552 "trtype": "TCP",
00:16:25.552 "adrfam": "IPv4",
00:16:25.552 "traddr": "10.0.0.1",
00:16:25.552 "trsvcid": "46412"
00:16:25.552 },
00:16:25.552 "auth": {
00:16:25.552 "state": "completed",
00:16:25.552 "digest": "sha512",
00:16:25.552 "dhgroup": "ffdhe3072"
00:16:25.552 }
00:16:25.552 }
00:16:25.552 ]'
00:16:25.552 01:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:25.552 01:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:25.552 01:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:25.552 01:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:16:25.552 01:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:25.552 01:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:25.552 01:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:25.810 01:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:25.810 01:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MWQ3MDA2MzRiZTNkMzdhZjNkMTI4Y2M4ZmMzZmFmZDIyYmUzZWU1MTE4ZmU4NzkyfVinMQ==: --dhchap-ctrl-secret DHHC-1:03:MDg2MTkwY2U5MjNhNDYwNTFhYWE0ZDNkZWJmOTg4OGQ0ZmZjZGU0Y2NjYjRmZmE2MzViZDE5NTAwNzQ5NzEwMR0d3Sw=:
00:16:26.740 01:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:26.740 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:26.740 01:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:16:26.740 01:09:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:26.740 01:09:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:26.740 01:09:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:26.740 01:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:26.740 01:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:16:26.740 01:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:16:26.997 01:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1
00:16:26.997 01:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:26.997 01:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:26.997 01:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:16:26.997 01:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:16:26.997 01:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:26.997 01:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:26.997 01:09:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:26.997 01:09:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:26.997 01:09:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:26.997 01:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:26.997 01:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:27.255 
00:16:27.255 01:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:27.255 01:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:27.255 01:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:27.513 01:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:27.513 01:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:27.513 01:09:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:27.513 01:09:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:27.513 01:09:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:27.513 01:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:27.513 {
00:16:27.513 "cntlid": 115,
00:16:27.513 "qid": 0,
00:16:27.513 "state": "enabled",
00:16:27.513 "thread": "nvmf_tgt_poll_group_000",
00:16:27.513 "listen_address": {
00:16:27.513 "trtype": "TCP",
00:16:27.513 "adrfam": "IPv4",
00:16:27.513 "traddr": "10.0.0.2",
00:16:27.513 "trsvcid": "4420"
00:16:27.513 },
00:16:27.513 "peer_address": {
00:16:27.513 "trtype": "TCP",
00:16:27.513 "adrfam": "IPv4",
00:16:27.513 "traddr": "10.0.0.1",
00:16:27.513 "trsvcid": "46448"
00:16:27.513 },
00:16:27.513 "auth": {
00:16:27.513 "state": "completed",
00:16:27.513 "digest": "sha512",
00:16:27.513 "dhgroup": "ffdhe3072"
00:16:27.513 }
00:16:27.513 }
00:16:27.513 ]'
00:16:27.513 01:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:27.513 01:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:27.513 01:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:27.513 01:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:16:27.770 01:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:27.770 01:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:27.770 01:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:28.028 01:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:28.028 01:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:ZDZlY2UwMmUyMDFjYThlZjVmNzQ3YTViMjNlNjNiYjI2oAMn: --dhchap-ctrl-secret DHHC-1:02:NDAxNTMxYTFiODFmNjdlZDhmNTMwZDUxZDVhNTFhOTNjYjczNmNiNGE2ODc5MDNj5nwdrQ==:
00:16:28.961 01:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:28.961 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:28.961 01:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:16:28.961 01:09:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:28.961 01:09:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:28.961 01:09:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:28.961 01:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:28.961 01:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:16:28.961 01:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:16:29.526 01:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2
00:16:29.526 01:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:29.526 01:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:29.526 01:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:16:29.526 01:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:16:29.526 01:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:29.526 01:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:29.526 01:09:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:29.526 01:09:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:29.526 01:09:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:29.526 01:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:29.526 01:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:30.041 01:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZGJjMTg0NzFjMmQ0Y2RjOTUyZjBhOTg0MzEwYTA5ZTZiNzdkYTU3MTdhMmY5Njlkkzt+ZA==: --dhchap-ctrl-secret DHHC-1:01:ZDA5YjUxMzU1YTE1MzlkOTM5NzM3N2YwMjMwZjg5N2Lku1mG: 00:16:30.973 01:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.973 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.973 01:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:30.974 01:09:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.974 01:09:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.974 01:09:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.974 01:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:30.974 01:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:30.974 01:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:31.257 01:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:16:31.257 01:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:31.257 01:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:31.257 01:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:31.257 01:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:31.257 01:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.257 01:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:31.257 01:09:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.257 01:09:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.257 01:09:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.257 01:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:31.257 01:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:31.515 00:16:31.515 01:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:31.515 01:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:31.515 01:09:47 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.774 01:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.774 01:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.774 01:09:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.774 01:09:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.774 01:09:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.774 01:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:31.774 { 00:16:31.774 "cntlid": 119, 00:16:31.774 "qid": 0, 00:16:31.774 "state": "enabled", 00:16:31.774 "thread": "nvmf_tgt_poll_group_000", 00:16:31.774 "listen_address": { 00:16:31.774 "trtype": "TCP", 00:16:31.774 "adrfam": "IPv4", 00:16:31.774 "traddr": "10.0.0.2", 00:16:31.774 "trsvcid": "4420" 00:16:31.774 }, 00:16:31.774 "peer_address": { 00:16:31.774 "trtype": "TCP", 00:16:31.774 "adrfam": "IPv4", 00:16:31.774 "traddr": "10.0.0.1", 00:16:31.774 "trsvcid": "60840" 00:16:31.774 }, 00:16:31.774 "auth": { 00:16:31.774 "state": "completed", 00:16:31.774 "digest": "sha512", 00:16:31.774 "dhgroup": "ffdhe3072" 00:16:31.774 } 00:16:31.774 } 00:16:31.774 ]' 00:16:31.774 01:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:31.774 01:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:31.774 01:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:31.774 01:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:31.774 01:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:32.032 01:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.032 01:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.032 01:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.290 01:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTRkODIxYzNmMjEyM2E4ODkyZmJjOGI3ODlmMWE2NzFmY2U2OTQ0MDAwOWU4MTlmNDgxMjcwZGY1MjFjY2RhOCZ2p/Y=: 00:16:33.224 01:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.224 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.224 01:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:33.224 01:09:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.224 01:09:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.224 01:09:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.224 01:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:33.224 01:09:48 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:33.224 01:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:33.224 01:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:33.224 01:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:16:33.224 01:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:33.224 01:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:33.224 01:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:33.224 01:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:33.224 01:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.224 01:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.224 01:09:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.224 01:09:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.224 01:09:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.224 01:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.224 01:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.789 00:16:33.789 01:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:33.789 01:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:33.789 01:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.048 01:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.048 01:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.048 01:09:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.048 01:09:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.048 01:09:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.048 01:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:34.048 { 00:16:34.048 "cntlid": 121, 00:16:34.048 "qid": 0, 00:16:34.048 "state": "enabled", 00:16:34.048 "thread": "nvmf_tgt_poll_group_000", 00:16:34.048 "listen_address": { 00:16:34.048 "trtype": "TCP", 00:16:34.048 "adrfam": "IPv4", 
00:16:34.048 "traddr": "10.0.0.2", 00:16:34.048 "trsvcid": "4420" 00:16:34.048 }, 00:16:34.048 "peer_address": { 00:16:34.048 "trtype": "TCP", 00:16:34.048 "adrfam": "IPv4", 00:16:34.048 "traddr": "10.0.0.1", 00:16:34.048 "trsvcid": "60864" 00:16:34.048 }, 00:16:34.048 "auth": { 00:16:34.048 "state": "completed", 00:16:34.048 "digest": "sha512", 00:16:34.048 "dhgroup": "ffdhe4096" 00:16:34.048 } 00:16:34.048 } 00:16:34.048 ]' 00:16:34.048 01:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:34.048 01:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:34.048 01:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:34.048 01:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:34.048 01:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:34.048 01:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.048 01:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.048 01:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.306 01:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MWQ3MDA2MzRiZTNkMzdhZjNkMTI4Y2M4ZmMzZmFmZDIyYmUzZWU1MTE4ZmU4NzkyfVinMQ==: --dhchap-ctrl-secret DHHC-1:03:MDg2MTkwY2U5MjNhNDYwNTFhYWE0ZDNkZWJmOTg4OGQ0ZmZjZGU0Y2NjYjRmZmE2MzViZDE5NTAwNzQ5NzEwMR0d3Sw=: 00:16:35.239 01:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.239 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.239 01:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:35.239 01:09:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.239 01:09:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.239 01:09:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.239 01:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:35.239 01:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:35.239 01:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:35.497 01:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:16:35.497 01:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:35.497 01:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:35.497 01:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:35.497 01:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:35.497 01:09:51 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.497 01:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.497 01:09:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.497 01:09:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.497 01:09:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.497 01:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.497 01:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.755 00:16:36.013 01:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:36.014 01:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.014 01:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:36.271 01:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.271 01:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.271 01:09:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.271 01:09:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.271 01:09:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.271 01:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:36.271 { 00:16:36.271 "cntlid": 123, 00:16:36.271 "qid": 0, 00:16:36.271 "state": "enabled", 00:16:36.271 "thread": "nvmf_tgt_poll_group_000", 00:16:36.271 "listen_address": { 00:16:36.271 "trtype": "TCP", 00:16:36.271 "adrfam": "IPv4", 00:16:36.271 "traddr": "10.0.0.2", 00:16:36.271 "trsvcid": "4420" 00:16:36.271 }, 00:16:36.271 "peer_address": { 00:16:36.271 "trtype": "TCP", 00:16:36.271 "adrfam": "IPv4", 00:16:36.271 "traddr": "10.0.0.1", 00:16:36.271 "trsvcid": "60892" 00:16:36.271 }, 00:16:36.271 "auth": { 00:16:36.271 "state": "completed", 00:16:36.271 "digest": "sha512", 00:16:36.271 "dhgroup": "ffdhe4096" 00:16:36.271 } 00:16:36.271 } 00:16:36.271 ]' 00:16:36.271 01:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:36.271 01:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:36.271 01:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:36.271 01:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:36.271 01:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:36.271 01:09:52 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.271 01:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.271 01:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.529 01:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:ZDZlY2UwMmUyMDFjYThlZjVmNzQ3YTViMjNlNjNiYjI2oAMn: --dhchap-ctrl-secret DHHC-1:02:NDAxNTMxYTFiODFmNjdlZDhmNTMwZDUxZDVhNTFhOTNjYjczNmNiNGE2ODc5MDNj5nwdrQ==: 00:16:37.460 01:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.460 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.460 01:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:37.460 01:09:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.460 01:09:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.460 01:09:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.460 01:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:37.460 01:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:37.460 01:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:37.718 01:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:16:37.718 01:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:37.718 01:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:37.718 01:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:37.718 01:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:37.718 01:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.718 01:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.718 01:09:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.718 01:09:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.718 01:09:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.718 01:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.718 01:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.282 00:16:38.282 01:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:38.282 01:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:38.282 01:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.282 01:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.282 01:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.282 01:09:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.282 01:09:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.282 01:09:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.282 01:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:38.282 { 00:16:38.282 "cntlid": 125, 00:16:38.282 "qid": 0, 00:16:38.282 "state": "enabled", 00:16:38.282 "thread": "nvmf_tgt_poll_group_000", 00:16:38.282 "listen_address": { 00:16:38.282 "trtype": "TCP", 00:16:38.282 "adrfam": "IPv4", 00:16:38.282 "traddr": "10.0.0.2", 00:16:38.282 "trsvcid": "4420" 00:16:38.282 }, 00:16:38.282 "peer_address": { 00:16:38.282 "trtype": "TCP", 00:16:38.282 "adrfam": "IPv4", 00:16:38.282 "traddr": "10.0.0.1", 00:16:38.282 "trsvcid": "60916" 00:16:38.282 }, 00:16:38.282 "auth": { 00:16:38.282 "state": "completed", 00:16:38.282 "digest": "sha512", 00:16:38.282 "dhgroup": "ffdhe4096" 00:16:38.282 } 00:16:38.282 } 00:16:38.282 ]' 00:16:38.539 01:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:38.539 01:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:38.539 01:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:38.539 01:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:38.539 01:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:38.539 01:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.539 01:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.539 01:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.797 01:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZGJjMTg0NzFjMmQ0Y2RjOTUyZjBhOTg0MzEwYTA5ZTZiNzdkYTU3MTdhMmY5Njlkkzt+ZA==: --dhchap-ctrl-secret DHHC-1:01:ZDA5YjUxMzU1YTE1MzlkOTM5NzM3N2YwMjMwZjg5N2Lku1mG: 00:16:39.728 01:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.728 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
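Every key/dhgroup pass in this trace is the same connect_authenticate cycle from target/auth.sh. Below is a minimal standalone sketch of one pass, using only the RPCs and flags that appear in the trace itself; the RPC, SUBNQN, HOSTNQN, DIGEST, DHGROUP, and KEYID variables are illustrative stand-ins for the script's loop state, not part of the original log.

# One connect_authenticate pass, reconstructed as a standalone sketch.
# Assumes the target-side rpc.py uses the default socket and the host-side
# SPDK app listens on /var/tmp/host.sock, as both do throughout this trace.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
DIGEST=sha512 DHGROUP=ffdhe4096 KEYID=2

# Pin the host-side initiator to the digest/DH group under test.
$RPC -s /var/tmp/host.sock bdev_nvme_set_options \
  --dhchap-digests "$DIGEST" --dhchap-dhgroups "$DHGROUP"

# Register the host NQN on the target with its DH-HMAC-CHAP key and, for
# bidirectional authentication, the controller key.
$RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
  --dhchap-key "key$KEYID" --dhchap-ctrlr-key "ckey$KEYID"

# Attach a controller over TCP; this is where the handshake actually runs.
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
  -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" \
  --dhchap-key "key$KEYID" --dhchap-ctrlr-key "ckey$KEYID"

# Confirm the controller came up, then check the qpair's auth record.
$RPC -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'  # nvme0
$RPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state'       # completed
$RPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.digest'      # sha512
$RPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.dhgroup'     # ffdhe4096

$RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

As the surrounding entries show, the trace then repeats the handshake through the kernel initiator — nvme connect with the expanded DHHC-1 secrets passed via --dhchap-secret/--dhchap-ctrl-secret, followed by nvme disconnect — and nvmf_subsystem_remove_host clears the host entry before the next key is tested.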
00:16:39.728 01:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:39.728 01:09:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.728 01:09:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.728 01:09:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.728 01:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:39.728 01:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:39.728 01:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:39.986 01:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:16:39.986 01:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:39.986 01:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:39.986 01:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:39.986 01:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:39.986 01:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.986 01:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:39.986 01:09:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.986 01:09:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.986 01:09:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.986 01:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:39.986 01:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:40.553 00:16:40.553 01:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:40.553 01:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.553 01:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:40.843 01:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.843 01:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.843 01:09:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.843 01:09:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:16:40.843 01:09:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.843 01:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:40.843 { 00:16:40.843 "cntlid": 127, 00:16:40.843 "qid": 0, 00:16:40.843 "state": "enabled", 00:16:40.843 "thread": "nvmf_tgt_poll_group_000", 00:16:40.843 "listen_address": { 00:16:40.843 "trtype": "TCP", 00:16:40.843 "adrfam": "IPv4", 00:16:40.843 "traddr": "10.0.0.2", 00:16:40.843 "trsvcid": "4420" 00:16:40.843 }, 00:16:40.843 "peer_address": { 00:16:40.843 "trtype": "TCP", 00:16:40.843 "adrfam": "IPv4", 00:16:40.843 "traddr": "10.0.0.1", 00:16:40.843 "trsvcid": "42052" 00:16:40.843 }, 00:16:40.843 "auth": { 00:16:40.843 "state": "completed", 00:16:40.843 "digest": "sha512", 00:16:40.843 "dhgroup": "ffdhe4096" 00:16:40.843 } 00:16:40.843 } 00:16:40.843 ]' 00:16:40.843 01:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:40.843 01:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:40.843 01:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:40.843 01:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:40.843 01:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:40.843 01:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.843 01:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.843 01:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.100 01:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTRkODIxYzNmMjEyM2E4ODkyZmJjOGI3ODlmMWE2NzFmY2U2OTQ0MDAwOWU4MTlmNDgxMjcwZGY1MjFjY2RhOCZ2p/Y=: 00:16:42.033 01:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.033 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.033 01:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:42.033 01:09:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.033 01:09:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.033 01:09:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.033 01:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:42.033 01:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:42.034 01:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:42.034 01:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:42.292 01:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:16:42.292 01:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:42.292 01:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:42.292 01:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:42.292 01:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:42.292 01:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.292 01:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.292 01:09:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.292 01:09:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.292 01:09:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.292 01:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.292 01:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.858 00:16:42.858 01:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:42.858 01:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:42.858 01:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.116 01:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.116 01:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.116 01:09:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.116 01:09:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.116 01:09:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.116 01:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:43.116 { 00:16:43.116 "cntlid": 129, 00:16:43.116 "qid": 0, 00:16:43.116 "state": "enabled", 00:16:43.116 "thread": "nvmf_tgt_poll_group_000", 00:16:43.116 "listen_address": { 00:16:43.116 "trtype": "TCP", 00:16:43.116 "adrfam": "IPv4", 00:16:43.116 "traddr": "10.0.0.2", 00:16:43.116 "trsvcid": "4420" 00:16:43.116 }, 00:16:43.116 "peer_address": { 00:16:43.116 "trtype": "TCP", 00:16:43.116 "adrfam": "IPv4", 00:16:43.116 "traddr": "10.0.0.1", 00:16:43.116 "trsvcid": "42086" 00:16:43.116 }, 00:16:43.116 "auth": { 00:16:43.116 "state": "completed", 00:16:43.116 "digest": "sha512", 00:16:43.116 "dhgroup": "ffdhe6144" 00:16:43.116 } 00:16:43.116 } 00:16:43.116 ]' 00:16:43.116 01:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:43.116 01:09:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:43.116 01:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:43.116 01:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:43.116 01:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:43.116 01:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.116 01:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.116 01:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.374 01:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MWQ3MDA2MzRiZTNkMzdhZjNkMTI4Y2M4ZmMzZmFmZDIyYmUzZWU1MTE4ZmU4NzkyfVinMQ==: --dhchap-ctrl-secret DHHC-1:03:MDg2MTkwY2U5MjNhNDYwNTFhYWE0ZDNkZWJmOTg4OGQ0ZmZjZGU0Y2NjYjRmZmE2MzViZDE5NTAwNzQ5NzEwMR0d3Sw=: 00:16:44.306 01:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.306 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.306 01:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:44.306 01:10:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.306 01:10:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.306 01:10:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.306 01:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:44.306 01:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:44.306 01:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:44.563 01:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:16:44.563 01:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:44.563 01:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:44.563 01:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:44.563 01:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:44.563 01:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.563 01:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.563 01:10:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.563 01:10:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.563 01:10:00 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.563 01:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.563 01:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.127 00:16:45.127 01:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:45.127 01:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:45.127 01:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.384 01:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.384 01:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.384 01:10:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.384 01:10:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.384 01:10:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.384 01:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:45.384 { 00:16:45.384 "cntlid": 131, 00:16:45.384 "qid": 0, 00:16:45.384 "state": "enabled", 00:16:45.384 "thread": "nvmf_tgt_poll_group_000", 00:16:45.384 "listen_address": { 00:16:45.384 "trtype": "TCP", 00:16:45.384 "adrfam": "IPv4", 00:16:45.384 "traddr": "10.0.0.2", 00:16:45.384 "trsvcid": "4420" 00:16:45.384 }, 00:16:45.384 "peer_address": { 00:16:45.384 "trtype": "TCP", 00:16:45.384 "adrfam": "IPv4", 00:16:45.384 "traddr": "10.0.0.1", 00:16:45.384 "trsvcid": "42102" 00:16:45.384 }, 00:16:45.384 "auth": { 00:16:45.384 "state": "completed", 00:16:45.384 "digest": "sha512", 00:16:45.384 "dhgroup": "ffdhe6144" 00:16:45.384 } 00:16:45.384 } 00:16:45.384 ]' 00:16:45.384 01:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:45.384 01:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:45.384 01:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:45.384 01:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:45.384 01:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:45.384 01:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.384 01:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.384 01:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.946 01:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:ZDZlY2UwMmUyMDFjYThlZjVmNzQ3YTViMjNlNjNiYjI2oAMn: --dhchap-ctrl-secret DHHC-1:02:NDAxNTMxYTFiODFmNjdlZDhmNTMwZDUxZDVhNTFhOTNjYjczNmNiNGE2ODc5MDNj5nwdrQ==: 00:16:46.876 01:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.876 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.876 01:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:46.876 01:10:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.876 01:10:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.876 01:10:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.876 01:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:46.876 01:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:46.876 01:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:46.876 01:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:16:46.876 01:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:46.876 01:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:46.876 01:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:46.876 01:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:46.876 01:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.876 01:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.876 01:10:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.876 01:10:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.134 01:10:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.134 01:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.134 01:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.698 00:16:47.698 01:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:47.698 01:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:47.698 01:10:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.698 01:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.698 01:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.956 01:10:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.956 01:10:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.956 01:10:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.956 01:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:47.956 { 00:16:47.956 "cntlid": 133, 00:16:47.956 "qid": 0, 00:16:47.956 "state": "enabled", 00:16:47.956 "thread": "nvmf_tgt_poll_group_000", 00:16:47.956 "listen_address": { 00:16:47.956 "trtype": "TCP", 00:16:47.956 "adrfam": "IPv4", 00:16:47.956 "traddr": "10.0.0.2", 00:16:47.956 "trsvcid": "4420" 00:16:47.956 }, 00:16:47.956 "peer_address": { 00:16:47.956 "trtype": "TCP", 00:16:47.956 "adrfam": "IPv4", 00:16:47.956 "traddr": "10.0.0.1", 00:16:47.956 "trsvcid": "42134" 00:16:47.956 }, 00:16:47.956 "auth": { 00:16:47.956 "state": "completed", 00:16:47.956 "digest": "sha512", 00:16:47.956 "dhgroup": "ffdhe6144" 00:16:47.956 } 00:16:47.956 } 00:16:47.956 ]' 00:16:47.956 01:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:47.956 01:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:47.956 01:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:47.956 01:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:47.956 01:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:47.956 01:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.956 01:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.956 01:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.212 01:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZGJjMTg0NzFjMmQ0Y2RjOTUyZjBhOTg0MzEwYTA5ZTZiNzdkYTU3MTdhMmY5Njlkkzt+ZA==: --dhchap-ctrl-secret DHHC-1:01:ZDA5YjUxMzU1YTE1MzlkOTM5NzM3N2YwMjMwZjg5N2Lku1mG: 00:16:49.142 01:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.142 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.142 01:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:49.142 01:10:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.142 01:10:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.142 01:10:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.142 01:10:04 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:49.142 01:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:49.142 01:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:49.399 01:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:16:49.399 01:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:49.399 01:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:49.399 01:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:49.399 01:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:49.399 01:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.399 01:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:49.399 01:10:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.399 01:10:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.399 01:10:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.399 01:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:49.399 01:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:49.964 00:16:49.964 01:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:49.964 01:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:49.964 01:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.964 01:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.964 01:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.964 01:10:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.964 01:10:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.964 01:10:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.964 01:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:49.964 { 00:16:49.964 "cntlid": 135, 00:16:49.964 "qid": 0, 00:16:49.964 "state": "enabled", 00:16:49.964 "thread": "nvmf_tgt_poll_group_000", 00:16:49.964 "listen_address": { 00:16:49.964 "trtype": "TCP", 00:16:49.964 "adrfam": "IPv4", 00:16:49.964 "traddr": "10.0.0.2", 00:16:49.964 "trsvcid": "4420" 00:16:49.964 }, 
00:16:49.964 "peer_address": { 00:16:49.964 "trtype": "TCP", 00:16:49.964 "adrfam": "IPv4", 00:16:49.964 "traddr": "10.0.0.1", 00:16:49.964 "trsvcid": "58980" 00:16:49.964 }, 00:16:49.964 "auth": { 00:16:49.964 "state": "completed", 00:16:49.964 "digest": "sha512", 00:16:49.964 "dhgroup": "ffdhe6144" 00:16:49.964 } 00:16:49.964 } 00:16:49.964 ]' 00:16:49.964 01:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:50.222 01:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:50.222 01:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:50.222 01:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:50.222 01:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:50.222 01:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.222 01:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.222 01:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.480 01:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTRkODIxYzNmMjEyM2E4ODkyZmJjOGI3ODlmMWE2NzFmY2U2OTQ0MDAwOWU4MTlmNDgxMjcwZGY1MjFjY2RhOCZ2p/Y=: 00:16:51.413 01:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.413 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.413 01:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:51.413 01:10:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.413 01:10:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.413 01:10:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.413 01:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:51.413 01:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:51.413 01:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:51.413 01:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:51.671 01:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:16:51.671 01:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:51.671 01:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:51.671 01:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:51.671 01:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:51.671 01:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:16:51.671 01:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.671 01:10:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.671 01:10:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.671 01:10:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.671 01:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.671 01:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.599 00:16:52.599 01:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:52.599 01:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:52.599 01:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.856 01:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.856 01:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.856 01:10:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.856 01:10:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.856 01:10:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.856 01:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:52.856 { 00:16:52.856 "cntlid": 137, 00:16:52.856 "qid": 0, 00:16:52.856 "state": "enabled", 00:16:52.856 "thread": "nvmf_tgt_poll_group_000", 00:16:52.856 "listen_address": { 00:16:52.856 "trtype": "TCP", 00:16:52.856 "adrfam": "IPv4", 00:16:52.856 "traddr": "10.0.0.2", 00:16:52.856 "trsvcid": "4420" 00:16:52.856 }, 00:16:52.856 "peer_address": { 00:16:52.856 "trtype": "TCP", 00:16:52.856 "adrfam": "IPv4", 00:16:52.856 "traddr": "10.0.0.1", 00:16:52.856 "trsvcid": "59014" 00:16:52.856 }, 00:16:52.856 "auth": { 00:16:52.856 "state": "completed", 00:16:52.856 "digest": "sha512", 00:16:52.856 "dhgroup": "ffdhe8192" 00:16:52.856 } 00:16:52.856 } 00:16:52.856 ]' 00:16:52.856 01:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:52.856 01:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:52.856 01:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:52.856 01:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:52.856 01:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:52.856 01:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.856 01:10:08 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.856 01:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.113 01:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MWQ3MDA2MzRiZTNkMzdhZjNkMTI4Y2M4ZmMzZmFmZDIyYmUzZWU1MTE4ZmU4NzkyfVinMQ==: --dhchap-ctrl-secret DHHC-1:03:MDg2MTkwY2U5MjNhNDYwNTFhYWE0ZDNkZWJmOTg4OGQ0ZmZjZGU0Y2NjYjRmZmE2MzViZDE5NTAwNzQ5NzEwMR0d3Sw=: 00:16:54.046 01:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.046 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.046 01:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:54.046 01:10:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.046 01:10:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.046 01:10:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.046 01:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:54.046 01:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:54.046 01:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:54.302 01:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:16:54.302 01:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:54.302 01:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:54.302 01:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:54.302 01:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:54.302 01:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.302 01:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.302 01:10:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.302 01:10:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.302 01:10:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.302 01:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.302 01:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
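The nvme connect steps re-check the same keys from the kernel host stack: --dhchap-secret carries the host key and --dhchap-ctrl-secret the controller key for bidirectional authentication, both in the NVMe interchange format DHHC-1:<t>:<base64>:, where <t> records the transformation applied to the secret (00 = used as-is, 01/02/03 = SHA-256/384/512); this run's key0 through key3 visibly use 00 through 03. A hedged example of producing and using such a key; the gen-dhchap-key subcommand and its flags are from recent nvme-cli and the values here are placeholders, not this run's secrets:

# Hypothetical: generate a 32-byte host key in interchange format (DHHC-1:01:...:).
hostkey=$(nvme gen-dhchap-key --hmac=1 --key-length=32 --nqn "$hostnqn")
# Bidirectional auth: prove the host with --dhchap-secret, verify the
# controller with --dhchap-ctrl-secret.
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2024-03.io.spdk:cnode0 \
    -q "$hostnqn" --hostid "$hostid" \
    --dhchap-secret "$hostkey" --dhchap-ctrl-secret "$ctrlkey"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0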
00:16:55.236
00:16:55.236 01:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:55.236 01:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:55.236 01:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:55.236 01:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:55.236 01:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:55.236 01:10:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:55.236 01:10:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:55.236 01:10:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:55.236 01:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:55.236 {
00:16:55.236 "cntlid": 139,
00:16:55.236 "qid": 0,
00:16:55.236 "state": "enabled",
00:16:55.236 "thread": "nvmf_tgt_poll_group_000",
00:16:55.236 "listen_address": {
00:16:55.236 "trtype": "TCP",
00:16:55.236 "adrfam": "IPv4",
00:16:55.236 "traddr": "10.0.0.2",
00:16:55.236 "trsvcid": "4420"
00:16:55.236 },
00:16:55.236 "peer_address": {
00:16:55.236 "trtype": "TCP",
00:16:55.236 "adrfam": "IPv4",
00:16:55.236 "traddr": "10.0.0.1",
00:16:55.236 "trsvcid": "59034"
00:16:55.236 },
00:16:55.236 "auth": {
00:16:55.236 "state": "completed",
00:16:55.236 "digest": "sha512",
00:16:55.236 "dhgroup": "ffdhe8192"
00:16:55.236 }
00:16:55.236 }
00:16:55.236 ]'
00:16:55.493 01:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:55.493 01:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:55.493 01:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:55.493 01:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:16:55.493 01:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:55.493 01:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:55.493 01:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:55.493 01:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:55.750 01:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:ZDZlY2UwMmUyMDFjYThlZjVmNzQ3YTViMjNlNjNiYjI2oAMn: --dhchap-ctrl-secret DHHC-1:02:NDAxNTMxYTFiODFmNjdlZDhmNTMwZDUxZDVhNTFhOTNjYjczNmNiNGE2ODc5MDNj5nwdrQ==:
00:16:56.681 01:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:56.681 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:56.681 01:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:16:56.681 01:10:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:56.681 01:10:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:56.681 01:10:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:56.681 01:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:56.681 01:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:16:56.681 01:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:16:56.939 01:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2
00:16:56.939 01:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:56.939 01:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:56.939 01:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:16:56.939 01:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:16:56.939 01:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:56.939 01:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:56.939 01:10:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:56.939 01:10:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:56.939 01:10:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:56.939 01:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:56.939 01:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:57.507
00:16:57.507 01:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:57.507 01:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:57.507 01:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:57.821 01:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:57.821 01:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:57.821 01:10:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:57.821 01:10:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:57.821 01:10:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:57.821 01:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:57.821 {
00:16:57.821 "cntlid": 141,
00:16:57.821 "qid": 0,
00:16:57.821 "state": "enabled",
00:16:57.821 "thread": "nvmf_tgt_poll_group_000",
00:16:57.821 "listen_address": {
00:16:57.821 "trtype": "TCP",
00:16:57.821 "adrfam": "IPv4",
00:16:57.821 "traddr": "10.0.0.2",
00:16:57.821 "trsvcid": "4420"
00:16:57.821 },
00:16:57.821 "peer_address": {
00:16:57.821 "trtype": "TCP",
00:16:57.821 "adrfam": "IPv4",
00:16:57.821 "traddr": "10.0.0.1",
00:16:57.821 "trsvcid": "59072"
00:16:57.821 },
00:16:57.821 "auth": {
00:16:57.821 "state": "completed",
00:16:57.821 "digest": "sha512",
00:16:57.821 "dhgroup": "ffdhe8192"
00:16:57.821 }
00:16:57.821 }
00:16:57.822 ]'
00:16:57.822 01:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:57.822 01:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:57.822 01:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:58.079 01:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:16:58.079 01:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:58.079 01:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:58.079 01:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:58.079 01:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:58.336 01:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZGJjMTg0NzFjMmQ0Y2RjOTUyZjBhOTg0MzEwYTA5ZTZiNzdkYTU3MTdhMmY5Njlkkzt+ZA==: --dhchap-ctrl-secret DHHC-1:01:ZDA5YjUxMzU1YTE1MzlkOTM5NzM3N2YwMjMwZjg5N2Lku1mG:
00:16:59.268 01:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:59.268 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:59.268 01:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:16:59.268 01:10:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:59.268 01:10:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:59.268 01:10:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:59.268 01:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:59.268 01:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:16:59.268 01:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:16:59.526 01:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3
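Every positive case above is judged the same way: nvmf_subsystem_get_qpairs returns one object per qpair, and its auth block must show state "completed" plus the digest and dhgroup that iteration selected (the \s\h\a\5\1\2-style noise is just bash xtrace escaping the right-hand side of [[ ... == ... ]]). Pulled out of the loop, the assertion reduces to a sketch like this; the helper name is illustrative, the RPC and jq paths are the ones traced above:

# Assert the subsystem's first qpair authenticated with the expected parameters.
check_qpair_auth() {
    local digest=$1 dhgroup=$2 qpairs
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]] &&
        [[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == "$digest" ]] &&
        [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
}
# e.g.: check_qpair_auth sha512 ffdhe8192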
00:16:59.526 01:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:59.526 01:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:59.526 01:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:16:59.526 01:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:16:59.526 01:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:59.526 01:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3
00:16:59.526 01:10:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:59.526 01:10:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:59.527 01:10:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:59.527 01:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:59.527 01:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:17:00.458
00:17:00.458 01:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:00.458 01:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:00.458 01:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:00.458 01:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:00.458 01:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:00.458 01:10:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:00.458 01:10:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:00.458 01:10:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:00.458 01:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:00.458 {
00:17:00.458 "cntlid": 143,
00:17:00.458 "qid": 0,
00:17:00.458 "state": "enabled",
00:17:00.458 "thread": "nvmf_tgt_poll_group_000",
00:17:00.459 "listen_address": {
00:17:00.459 "trtype": "TCP",
00:17:00.459 "adrfam": "IPv4",
00:17:00.459 "traddr": "10.0.0.2",
00:17:00.459 "trsvcid": "4420"
00:17:00.459 },
00:17:00.459 "peer_address": {
00:17:00.459 "trtype": "TCP",
00:17:00.459 "adrfam": "IPv4",
00:17:00.459 "traddr": "10.0.0.1",
00:17:00.459 "trsvcid": "38116"
00:17:00.459 },
00:17:00.459 "auth": {
00:17:00.459 "state": "completed",
00:17:00.459 "digest": "sha512",
00:17:00.459 "dhgroup": "ffdhe8192"
00:17:00.459 }
00:17:00.459 }
00:17:00.459 ]'
00:17:00.716 01:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:00.716 01:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:00.716 01:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:00.716 01:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:17:00.716 01:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:00.716 01:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:00.716 01:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:00.716 01:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:00.975 01:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTRkODIxYzNmMjEyM2E4ODkyZmJjOGI3ODlmMWE2NzFmY2U2OTQ0MDAwOWU4MTlmNDgxMjcwZGY1MjFjY2RhOCZ2p/Y=:
00:17:01.909 01:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:01.909 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:01.909 01:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:17:01.909 01:10:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:01.909 01:10:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:01.909 01:10:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:01.909 01:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=,
00:17:01.909 01:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512
00:17:01.909 01:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=,
00:17:01.909 01:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:17:01.909 01:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:17:01.909 01:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:17:02.166 01:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0
00:17:02.166 01:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:02.166 01:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:17:02.166 01:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:17:02.166 01:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:17:02.166 01:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:02.166 01:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0
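At target/auth.sh@102-103 the sweep widens the host to every digest and dhgroup at once; the IFS=, plus printf %s pair in the trace is the usual bash idiom for joining an array into the comma-separated lists that bdev_nvme_set_options takes. A sketch of that idiom, using the exact values from this run:

digests=(sha256 sha384 sha512)
dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
# "${arr[*]}" joins elements on the first character of IFS, so setting IFS=,
# inside the command substitution yields "sha256,sha384,sha512" without
# disturbing the caller's IFS.
hostrpc bdev_nvme_set_options \
    --dhchap-digests "$(IFS=,; printf %s "${digests[*]}")" \
    --dhchap-dhgroups "$(IFS=,; printf %s "${dhgroups[*]}")"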
00:17:02.166 01:10:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:02.166 01:10:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:02.166 01:10:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:02.166 01:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:02.166 01:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:03.098
00:17:03.098 01:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:03.098 01:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:03.098 01:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:03.354 01:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:03.354 01:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:03.354 01:10:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:03.354 01:10:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:03.354 01:10:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:03.354 01:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:03.354 {
00:17:03.354 "cntlid": 145,
00:17:03.354 "qid": 0,
00:17:03.354 "state": "enabled",
00:17:03.354 "thread": "nvmf_tgt_poll_group_000",
00:17:03.355 "listen_address": {
00:17:03.355 "trtype": "TCP",
00:17:03.355 "adrfam": "IPv4",
00:17:03.355 "traddr": "10.0.0.2",
00:17:03.355 "trsvcid": "4420"
00:17:03.355 },
00:17:03.355 "peer_address": {
00:17:03.355 "trtype": "TCP",
00:17:03.355 "adrfam": "IPv4",
00:17:03.355 "traddr": "10.0.0.1",
00:17:03.355 "trsvcid": "38126"
00:17:03.355 },
00:17:03.355 "auth": {
00:17:03.355 "state": "completed",
00:17:03.355 "digest": "sha512",
00:17:03.355 "dhgroup": "ffdhe8192"
00:17:03.355 }
00:17:03.355 }
00:17:03.355 ]'
00:17:03.355 01:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:03.355 01:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:03.355 01:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:03.355 01:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:17:03.355 01:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:03.355 01:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:03.355 01:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:03.355 01:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:03.612 01:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MWQ3MDA2MzRiZTNkMzdhZjNkMTI4Y2M4ZmMzZmFmZDIyYmUzZWU1MTE4ZmU4NzkyfVinMQ==: --dhchap-ctrl-secret DHHC-1:03:MDg2MTkwY2U5MjNhNDYwNTFhYWE0ZDNkZWJmOTg4OGQ0ZmZjZGU0Y2NjYjRmZmE2MzViZDE5NTAwNzQ5NzEwMR0d3Sw=:
00:17:04.545 01:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:04.545 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:04.545 01:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:17:04.545 01:10:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:04.545 01:10:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:04.545 01:10:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:04.545 01:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1
00:17:04.545 01:10:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:04.545 01:10:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:04.545 01:10:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:04.545 01:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2
00:17:04.545 01:10:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0
00:17:04.545 01:10:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2
00:17:04.545 01:10:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc
00:17:04.545 01:10:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:17:04.545 01:10:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc
00:17:04.545 01:10:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:17:04.545 01:10:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2
00:17:04.545 01:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2
00:17:05.475 request:
00:17:05.475 {
00:17:05.475 "name": "nvme0",
00:17:05.475 "trtype": "tcp",
00:17:05.475 "traddr": "10.0.0.2",
00:17:05.475 "adrfam": "ipv4",
00:17:05.475 "trsvcid": "4420",
00:17:05.475 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:17:05.475 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a",
00:17:05.475 "prchk_reftag": false,
00:17:05.475 "prchk_guard": false,
00:17:05.475 "hdgst": false,
00:17:05.475 "ddgst": false,
00:17:05.475 "dhchap_key": "key2",
00:17:05.475 "method": "bdev_nvme_attach_controller",
00:17:05.475 "req_id": 1
00:17:05.475 }
00:17:05.475 Got JSON-RPC error response
00:17:05.475 response:
00:17:05.475 {
00:17:05.475 "code": -5,
00:17:05.475 "message": "Input/output error"
00:17:05.475 }
00:17:05.476 01:10:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1
00:17:05.476 01:10:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:17:05.476 01:10:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:17:05.476 01:10:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:17:05.476 01:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:17:05.476 01:10:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:05.476 01:10:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:05.476 01:10:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:05.476 01:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:05.476 01:10:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:05.476 01:10:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:05.476 01:10:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:05.476 01:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:17:05.476 01:10:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0
00:17:05.476 01:10:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:17:05.476 01:10:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc
00:17:05.476 01:10:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:17:05.476 01:10:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc
00:17:05.476 01:10:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:17:05.476 01:10:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
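From target/auth.sh@118 onward the suite flips to negative testing: the host entry is registered with key1 only, so attaching with key2 (and later with mismatched controller keys) must fail, and the failure surfaces as JSON-RPC error -5, "Input/output error", from rpc.py. The NOT wrapper that keeps these expected failures from aborting the run simply inverts the wrapped command's exit status; a minimal sketch of the idea (SPDK's real helper in autotest_common.sh, traced above via valid_exec_arg and the es bookkeeping, is more careful, e.g. distinguishing crashes from clean failures):

NOT() {
    # Succeed only when the wrapped command fails cleanly.
    local es=0
    "$@" || es=$?
    ((es != 0))
}
# Usage from this trace: NOT hostrpc bdev_nvme_attach_controller ... --dhchap-key key2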
00:17:05.476 01:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:17:06.041 request:
00:17:06.041 {
00:17:06.041 "name": "nvme0",
00:17:06.041 "trtype": "tcp",
00:17:06.041 "traddr": "10.0.0.2",
00:17:06.041 "adrfam": "ipv4",
00:17:06.041 "trsvcid": "4420",
00:17:06.041 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:17:06.041 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a",
00:17:06.041 "prchk_reftag": false,
00:17:06.041 "prchk_guard": false,
00:17:06.041 "hdgst": false,
00:17:06.041 "ddgst": false,
00:17:06.041 "dhchap_key": "key1",
00:17:06.041 "dhchap_ctrlr_key": "ckey2",
00:17:06.041 "method": "bdev_nvme_attach_controller",
00:17:06.041 "req_id": 1
00:17:06.041 }
00:17:06.041 Got JSON-RPC error response
00:17:06.041 response:
00:17:06.041 {
00:17:06.041 "code": -5,
00:17:06.041 "message": "Input/output error"
00:17:06.041 }
00:17:06.041 01:10:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1
00:17:06.041 01:10:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:17:06.041 01:10:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:17:06.041 01:10:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:17:06.041 01:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:17:06.041 01:10:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:06.041 01:10:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:06.041 01:10:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:06.041 01:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1
00:17:06.041 01:10:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:06.041 01:10:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:06.041 01:10:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:06.041 01:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:06.041 01:10:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0
00:17:06.041 01:10:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:06.041 01:10:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc
00:17:06.041 01:10:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:17:06.041 01:10:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc
00:17:06.041 01:10:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:17:06.041 01:10:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:06.041 01:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:06.975 request:
00:17:06.975 {
00:17:06.975 "name": "nvme0",
00:17:06.975 "trtype": "tcp",
00:17:06.975 "traddr": "10.0.0.2",
00:17:06.975 "adrfam": "ipv4",
00:17:06.975 "trsvcid": "4420",
00:17:06.975 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:17:06.975 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a",
00:17:06.975 "prchk_reftag": false,
00:17:06.975 "prchk_guard": false,
00:17:06.975 "hdgst": false,
00:17:06.975 "ddgst": false,
00:17:06.975 "dhchap_key": "key1",
00:17:06.975 "dhchap_ctrlr_key": "ckey1",
00:17:06.975 "method": "bdev_nvme_attach_controller",
00:17:06.975 "req_id": 1
00:17:06.975 }
00:17:06.975 Got JSON-RPC error response
00:17:06.975 response:
00:17:06.975 {
00:17:06.975 "code": -5,
00:17:06.975 "message": "Input/output error"
00:17:06.975 }
00:17:06.975 01:10:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1
00:17:06.975 01:10:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:17:06.975 01:10:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:17:06.975 01:10:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:17:06.975 01:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:17:06.975 01:10:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:06.975 01:10:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:06.975 01:10:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:06.975 01:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 4141224
00:17:06.975 01:10:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 4141224 ']'
00:17:06.975 01:10:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 4141224
00:17:06.975 01:10:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname
00:17:06.975 01:10:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:17:06.975 01:10:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4141224
00:17:06.975 01:10:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:17:06.975 01:10:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
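killprocess is the suite's guarded teardown, and its shape can be read off the trace above: check the pid was given and is still alive, confirm via ps that it is the expected SPDK reactor rather than a sudo wrapper, then kill and reap it. A condensed sketch of that logic, not the verbatim autotest_common.sh helper:

killprocess() {
    local pid=$1 process_name
    [[ -n $pid ]] || return 1
    kill -0 "$pid" 2> /dev/null || return 0   # already gone, nothing to do
    if [[ $(uname) == Linux ]]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    # Never signal a sudo wrapper directly; the real target is its child.
    [[ $process_name != sudo ]] || return 1
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true
}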
00:17:06.975 01:10:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4141224'
00:17:06.975 killing process with pid 4141224
00:17:06.975 01:10:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 4141224
00:17:06.975 01:10:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 4141224
00:17:07.233 01:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth
00:17:07.233 01:10:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:17:07.233 01:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable
00:17:07.233 01:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:07.233 01:10:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=4162982
00:17:07.233 01:10:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth
00:17:07.233 01:10:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 4162982
00:17:07.233 01:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 4162982 ']'
00:17:07.233 01:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:07.233 01:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100
00:17:07.233 01:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
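target/auth.sh@139 then restarts the target for the second half of the test: nvmfappstart launches nvmf_tgt inside the cvl_0_0_ns_spdk network namespace with --wait-for-rpc (the app stays in a pre-init state until an RPC tells it to proceed) and -L nvmf_auth (auth debug tracing), and waitforlisten polls the RPC socket for up to max_retries attempts. A sketch of such a poll loop, under the assumption of the suite's $rootdir repo-root variable and the stock /var/tmp/spdk.sock address from the trace; the real helper differs in detail:

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    while ((max_retries-- > 0)); do
        kill -0 "$pid" 2> /dev/null || return 1   # app died before listening
        # rpc_get_methods succeeds once the RPC server accepts connections.
        "$rootdir/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
        sleep 0.1
    done
    return 1
}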
00:17:07.491 01:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable
00:17:07.491 01:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:07.748 01:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:17:07.748 01:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0
00:17:07.748 01:10:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:17:07.748 01:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable
00:17:07.748 01:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:07.748 01:10:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:17:07.748 01:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT
00:17:07.748 01:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 4162982
00:17:07.748 01:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 4162982 ']'
00:17:07.748 01:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:07.748 01:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100
00:17:07.748 01:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:07.748 01:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable
00:17:07.748 01:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:08.005 01:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:17:08.005 01:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0
00:17:08.005 01:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd
00:17:08.005 01:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:08.005 01:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:08.005 01:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:08.005 01:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3
00:17:08.005 01:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:08.005 01:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:17:08.005 01:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:17:08.005 01:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:17:08.005 01:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:08.005 01:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3
00:17:08.005 01:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:08.005 01:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:08.005 01:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:08.005 01:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:17:08.005 01:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:17:08.952
00:17:08.952 01:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:08.952 01:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:08.952 01:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:08.952 01:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:08.952 01:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:08.952 01:10:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:08.952 01:10:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:08.952 01:10:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:08.952 01:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:08.952 {
00:17:08.952 "cntlid": 1,
00:17:08.952 "qid": 0,
00:17:08.952 "state": "enabled",
00:17:08.952 "thread": "nvmf_tgt_poll_group_000",
00:17:08.952 "listen_address": {
00:17:08.952 "trtype": "TCP",
00:17:08.952 "adrfam": "IPv4",
00:17:08.952 "traddr": "10.0.0.2",
00:17:08.952 "trsvcid": "4420"
00:17:08.952 },
00:17:08.952 "peer_address": {
00:17:08.952 "trtype": "TCP",
00:17:08.952 "adrfam": "IPv4",
00:17:08.952 "traddr": "10.0.0.1",
00:17:08.952 "trsvcid": "38184"
00:17:08.952 },
00:17:08.952 "auth": {
00:17:08.952 "state": "completed",
00:17:08.952 "digest": "sha512",
00:17:08.952 "dhgroup": "ffdhe8192"
00:17:08.952 }
00:17:08.952 }
00:17:08.952 ]'
00:17:08.952 01:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:08.952 01:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:08.952 01:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:08.952 01:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:17:08.952 01:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:09.210 01:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:09.210 01:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:09.210 01:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:09.467 01:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTRkODIxYzNmMjEyM2E4ODkyZmJjOGI3ODlmMWE2NzFmY2U2OTQ0MDAwOWU4MTlmNDgxMjcwZGY1MjFjY2RhOCZ2p/Y=:
00:17:10.401 01:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:10.401 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:10.401 01:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:17:10.401 01:10:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:10.401 01:10:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:10.401 01:10:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:10.401 01:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3
00:17:10.401 01:10:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:10.401 01:10:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:10.401 01:10:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:10.401 01:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256
00:17:10.401 01:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256
00:17:10.401 01:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:17:10.401 01:10:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0
00:17:10.401 01:10:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:17:10.401 01:10:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc
00:17:10.401 01:10:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:17:10.401 01:10:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc
00:17:10.401 01:10:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:17:10.401 01:10:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:17:10.401 01:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:17:10.659 request:
00:17:10.659 {
00:17:10.659 "name": "nvme0",
00:17:10.659 "trtype": "tcp",
00:17:10.659 "traddr": "10.0.0.2",
00:17:10.659 "adrfam": "ipv4",
00:17:10.659 "trsvcid": "4420",
00:17:10.659 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:17:10.659 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a",
00:17:10.659 "prchk_reftag": false,
00:17:10.659 "prchk_guard": false,
00:17:10.659 "hdgst": false,
00:17:10.659 "ddgst": false,
00:17:10.659 "dhchap_key": "key3",
00:17:10.659 "method": "bdev_nvme_attach_controller",
00:17:10.659 "req_id": 1
00:17:10.659 }
00:17:10.659 Got JSON-RPC error response
00:17:10.659 response:
00:17:10.659 {
00:17:10.659 "code": -5,
00:17:10.659 "message": "Input/output error"
00:17:10.659 }
00:17:10.659 01:10:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1
00:17:10.659 01:10:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:17:10.659 01:10:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:17:10.659 01:10:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:17:10.659 01:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=,
00:17:10.659 01:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512
00:17:10.659 01:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512
00:17:10.659 01:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512
00:17:10.917 01:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
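This last negative block checks parameter negotiation rather than key material: with the host re-added using key3, the host side is first restricted to the sha256 digest (@157) and then, with digests restored, to the ffdhe2048 group (@163), while key3 was only ever verified above under sha512/ffdhe8192; both attach attempts are therefore expected to fail with the same -5 error. Compressed to its essentials, using the exact values from this trace:

# Digest mismatch: host offers only sha256, so the handshake cannot complete.
hostrpc bdev_nvme_set_options --dhchap-digests sha256
NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
# Dhgroup mismatch: digests restored, but only ffdhe2048 offered.
hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512
NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3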
00:17:10.917 01:10:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0
00:17:10.917 01:10:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:17:10.917 01:10:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc
00:17:10.917 01:10:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:17:10.917 01:10:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc
00:17:10.917 01:10:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:17:10.917 01:10:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:17:10.917 01:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:17:11.174 request:
00:17:11.174 {
00:17:11.174 "name": "nvme0",
00:17:11.174 "trtype": "tcp",
00:17:11.174 "traddr": "10.0.0.2",
00:17:11.174 "adrfam": "ipv4",
00:17:11.174 "trsvcid": "4420",
00:17:11.174 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:17:11.174 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a",
00:17:11.174 "prchk_reftag": false,
00:17:11.174 "prchk_guard": false,
00:17:11.174 "hdgst": false,
00:17:11.174 "ddgst": false,
00:17:11.174 "dhchap_key": "key3",
00:17:11.174 "method": "bdev_nvme_attach_controller",
00:17:11.174 "req_id": 1
00:17:11.174 }
00:17:11.174 Got JSON-RPC error response
00:17:11.174 response:
00:17:11.174 {
00:17:11.174 "code": -5,
00:17:11.174 "message": "Input/output error"
00:17:11.174 }
00:17:11.174 01:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1
00:17:11.174 01:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:17:11.174 01:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:17:11.174 01:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:17:11.174 01:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=,
00:17:11.174 01:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512
00:17:11.174 01:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=,
00:17:11.174 01:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:17:11.174 01:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:17:11.174 01:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:17:11.432 01:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:17:11.432 01:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:11.432 01:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:11.432 01:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:11.432 01:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:17:11.432 01:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:11.432 01:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:11.432 01:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:11.432 01:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:17:11.432 01:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0
00:17:11.432 01:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:17:11.432 01:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc
00:17:11.432 01:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:17:11.432 01:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc
00:17:11.432 01:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:17:11.432 01:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:17:11.432 01:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:17:11.690 request:
00:17:11.690 {
00:17:11.690 "name": "nvme0",
00:17:11.690 "trtype": "tcp",
00:17:11.690 "traddr": "10.0.0.2",
00:17:11.690 "adrfam": "ipv4",
00:17:11.690 "trsvcid": "4420",
00:17:11.690 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:17:11.690 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a",
00:17:11.690 "prchk_reftag": false,
00:17:11.690 "prchk_guard": false,
00:17:11.690 "hdgst": false,
00:17:11.690 "ddgst": false,
00:17:11.690 "dhchap_key": "key0",
00:17:11.690 "dhchap_ctrlr_key": "key1",
00:17:11.690 "method": "bdev_nvme_attach_controller",
00:17:11.690 "req_id": 1
00:17:11.690 }
00:17:11.690 Got JSON-RPC error response
00:17:11.690 response:
00:17:11.690 {
00:17:11.690 "code": -5,
00:17:11.690 "message": "Input/output error"
00:17:11.690 }
00:17:11.690 01:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1
00:17:11.690 01:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:17:11.690 01:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:17:11.690 01:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:17:11.690 01:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0
00:17:11.690 01:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0
00:17:11.948
00:17:12.206 01:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers
00:17:12.206 01:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name'
00:17:12.206 01:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:12.206 01:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:12.206 01:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:12.206 01:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:12.464 01:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT
00:17:12.464 01:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup
00:17:12.464 01:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 4141243
00:17:12.464 01:10:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 4141243 ']'
00:17:12.464 01:10:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 4141243
00:17:12.464 01:10:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname
00:17:12.464 01:10:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:17:12.464 01:10:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4141243
00:17:12.722 01:10:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:17:12.722 01:10:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:17:12.722 01:10:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4141243'
killing process with pid 4141243
00:17:12.722 01:10:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 4141243
00:17:12.722 01:10:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 4141243
00:17:12.980 01:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:12.980 01:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:12.980 01:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:17:12.980 01:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:12.980 01:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:17:12.980 01:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:12.980 01:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:12.980 rmmod nvme_tcp 00:17:12.980 rmmod nvme_fabrics 00:17:12.980 rmmod nvme_keyring 00:17:12.980 01:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:12.980 01:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:17:12.980 01:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:17:12.980 01:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 4162982 ']' 00:17:12.980 01:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 4162982 00:17:12.980 01:10:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 4162982 ']' 00:17:12.980 01:10:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 4162982 00:17:12.980 01:10:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:17:12.980 01:10:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:12.980 01:10:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4162982 00:17:12.980 01:10:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:12.980 01:10:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:12.980 01:10:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4162982' 00:17:12.980 killing process with pid 4162982 00:17:12.980 01:10:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 4162982 00:17:12.980 01:10:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 4162982 00:17:13.239 01:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:13.239 01:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:13.239 01:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:13.239 01:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:13.239 01:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:13.239 01:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:13.239 01:10:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:13.239 01:10:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:15.831 01:10:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:15.831 01:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.xYV /tmp/spdk.key-sha256.dJ9 /tmp/spdk.key-sha384.f6b /tmp/spdk.key-sha512.woO /tmp/spdk.key-sha512.1XE /tmp/spdk.key-sha384.tGw /tmp/spdk.key-sha256.GPm '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:17:15.831 00:17:15.831 real 3m1.121s 00:17:15.831 user 7m3.535s 00:17:15.831 sys 0m25.220s 00:17:15.831 01:10:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:15.831 01:10:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.831 ************************************ 00:17:15.831 END TEST nvmf_auth_target 00:17:15.831 ************************************ 00:17:15.831 01:10:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:15.831 01:10:31 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:17:15.831 01:10:31 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:15.831 01:10:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:17:15.831 01:10:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:15.831 01:10:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:15.831 ************************************ 00:17:15.831 START TEST nvmf_bdevio_no_huge 00:17:15.831 ************************************ 00:17:15.831 01:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:15.831 * Looking for test storage... 00:17:15.831 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:15.831 01:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:15.831 01:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:17:15.831 01:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:15.831 01:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:15.831 01:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:15.831 01:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:15.831 01:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:15.831 01:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:15.831 01:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:15.831 01:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:15.831 01:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:15.831 01:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:15.831 01:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:15.831 01:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:17:15.831 01:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:15.831 01:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:15.831 01:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:15.831 01:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
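Each suite here executes under the run_test wrapper, which prints the starred START/END banners and the real/user/sys timing shown above and hands the script's exit code back to nvmf.sh. A minimal sketch of that pattern (assumed shape only; the real helper in autotest_common.sh also manages xtrace state and failure bookkeeping):

  run_test() {
      local name=$1; shift
      echo "START TEST $name"
      time "$@"    # e.g. .../test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages
      local rc=$?
      echo "END TEST $name"
      return $rc
  }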
00:17:15.831 01:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:15.831 01:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:15.831 01:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:15.831 01:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:15.831 01:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.831 01:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.831 01:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.831 01:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:17:15.831 01:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.831 01:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:17:15.831 01:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:15.831 01:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:15.831 01:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:15.831 01:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:15.831 01:10:31 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:15.831 01:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:15.831 01:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:15.831 01:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:15.831 01:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:15.831 01:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:15.831 01:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:15.831 01:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:15.831 01:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:15.831 01:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:15.831 01:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:15.831 01:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:15.831 01:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:15.831 01:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:15.831 01:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:15.831 01:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:15.831 01:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:15.831 01:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:17:15.831 01:10:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:17.732 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:17.732 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:17:17.732 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:17.732 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:17.732 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:17.732 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:17.732 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:17.732 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:17:17.732 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:17.732 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:17:17.732 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:17:17.732 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:17:17.732 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:17:17.732 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:17:17.732 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:17:17.732 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
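gather_supported_nvmf_pci_devs, entered here, whitelists NIC device IDs per vendor (Intel E810 0x1592/0x159b, X722 0x37d2, a set of Mellanox IDs) and then resolves each matching PCI function to its kernel netdev through sysfs. The core of that lookup, reduced to a sketch using the address this run discovers:

  pci=0000:09:00.0                                   # matched below as 0x8086:0x159b (E810)
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # glob the netdev dir under the PCI function
  pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep the ifnames
  echo "Found net devices under $pci: ${pci_net_devs[*]}"   # -> cvl_0_0 on this host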
00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:17:17.733 Found 0000:09:00.0 (0x8086 - 0x159b) 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:17:17.733 Found 0000:09:00.1 (0x8086 - 0x159b) 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:17.733 
01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:17:17.733 Found net devices under 0000:09:00.0: cvl_0_0 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:17:17.733 Found net devices under 0000:09:00.1: cvl_0_1 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:17.733 01:10:33 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:17.733 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:17.733 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.145 ms 00:17:17.733 00:17:17.733 --- 10.0.0.2 ping statistics --- 00:17:17.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:17.733 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:17.733 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:17.733 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:17:17.733 00:17:17.733 --- 10.0.0.1 ping statistics --- 00:17:17.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:17.733 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=4165744 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 4165744 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 4165744 ']' 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:17.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:17.733 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:17.992 [2024-07-16 01:10:33.731167] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:17:17.992 [2024-07-16 01:10:33.731247] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:17.992 [2024-07-16 01:10:33.802147] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:17.992 [2024-07-16 01:10:33.900636] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:17.992 [2024-07-16 01:10:33.900691] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:17.992 [2024-07-16 01:10:33.900718] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:17.992 [2024-07-16 01:10:33.900729] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:17.992 [2024-07-16 01:10:33.900738] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
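nvmf_tcp_init, which ran just before this launch, fakes a two-host NVMe/TCP setup on one machine: one E810 port is moved into a private network namespace for the target while its sibling stays in the root namespace as the initiator, and nvmf_tgt is then started inside that namespace via ip netns exec (with --no-huge -s 1024, since this suite validates hugepage-free operation). The topology, condensed from the commands above:

  ip netns add cvl_0_0_ns_spdk                    # target namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk       # target-side port moves in
  ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ping -c 1 10.0.0.2                              # reachability check before the tests run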
00:17:17.993 [2024-07-16 01:10:33.901232] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:17:17.993 [2024-07-16 01:10:33.901295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:17:17.993 [2024-07-16 01:10:33.901335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:17:17.993 [2024-07-16 01:10:33.901338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:18.251 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:18.251 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:17:18.251 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:18.251 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:18.251 01:10:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:18.251 01:10:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:18.251 01:10:34 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:18.251 01:10:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.251 01:10:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:18.251 [2024-07-16 01:10:34.032124] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:18.251 01:10:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.251 01:10:34 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:18.251 01:10:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.251 01:10:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:18.251 Malloc0 00:17:18.251 01:10:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.251 01:10:34 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:18.251 01:10:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.251 01:10:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:18.251 01:10:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.251 01:10:34 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:18.251 01:10:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.251 01:10:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:18.251 01:10:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.251 01:10:34 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:18.251 01:10:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.251 01:10:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:18.251 [2024-07-16 01:10:34.070320] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:18.251 01:10:34 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.251 01:10:34 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:18.251 01:10:34 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:18.251 01:10:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:17:18.251 01:10:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:17:18.251 01:10:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:18.251 01:10:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:18.251 { 00:17:18.251 "params": { 00:17:18.251 "name": "Nvme$subsystem", 00:17:18.251 "trtype": "$TEST_TRANSPORT", 00:17:18.251 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:18.251 "adrfam": "ipv4", 00:17:18.251 "trsvcid": "$NVMF_PORT", 00:17:18.251 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:18.251 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:18.251 "hdgst": ${hdgst:-false}, 00:17:18.251 "ddgst": ${ddgst:-false} 00:17:18.251 }, 00:17:18.251 "method": "bdev_nvme_attach_controller" 00:17:18.251 } 00:17:18.251 EOF 00:17:18.251 )") 00:17:18.251 01:10:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:17:18.251 01:10:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:17:18.251 01:10:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:17:18.251 01:10:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:18.251 "params": { 00:17:18.251 "name": "Nvme1", 00:17:18.251 "trtype": "tcp", 00:17:18.251 "traddr": "10.0.0.2", 00:17:18.251 "adrfam": "ipv4", 00:17:18.251 "trsvcid": "4420", 00:17:18.251 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:18.252 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:18.252 "hdgst": false, 00:17:18.252 "ddgst": false 00:17:18.252 }, 00:17:18.252 "method": "bdev_nvme_attach_controller" 00:17:18.252 }' 00:17:18.252 [2024-07-16 01:10:34.115520] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 
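Note that bdevio never talks to rpc.py here: gen_nvmf_target_json emits a one-shot JSON config whose only entry is the bdev_nvme_attach_controller call printed above, and process substitution hands it to the binary, which is why the command line reads --json /dev/fd/62. A sketch of the mechanism, with the generator stubbed to the fragment visible in the log (the fuller envelope that jq assembles around it is abbreviated away):

  gen_json() {   # stand-in for the harness's gen_nvmf_target_json
      printf '%s\n' '{
        "params": {
          "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
          "adrfam": "ipv4", "trsvcid": "4420",
          "subnqn": "nqn.2016-06.io.spdk:cnode1",
          "hostnqn": "nqn.2016-06.io.spdk:host1",
          "hdgst": false, "ddgst": false
        },
        "method": "bdev_nvme_attach_controller"
      }'
  }
  bdevio --json <(gen_json) --no-huge -s 1024     # <(...) shows up to bdevio as /dev/fd/NN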
00:17:18.252 [2024-07-16 01:10:34.115625] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid4165777 ]
00:17:18.252 [2024-07-16 01:10:34.180091] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3
00:17:18.509 [2024-07-16 01:10:34.295744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:17:18.509 [2024-07-16 01:10:34.295794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:17:18.509 [2024-07-16 01:10:34.295798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:17:18.767 I/O targets:
00:17:18.767 Nvme1n1: 131072 blocks of 512 bytes (64 MiB)
00:17:18.767
00:17:18.767
00:17:18.767 CUnit - A unit testing framework for C - Version 2.1-3
00:17:18.767 http://cunit.sourceforge.net/
00:17:18.767
00:17:18.767
00:17:18.767 Suite: bdevio tests on: Nvme1n1
00:17:18.767 Test: blockdev write read block ...passed
00:17:18.767 Test: blockdev write zeroes read block ...passed
00:17:18.767 Test: blockdev write zeroes read no split ...passed
00:17:18.767 Test: blockdev write zeroes read split ...passed
00:17:19.025 Test: blockdev write zeroes read split partial ...passed
00:17:19.025 Test: blockdev reset ...[2024-07-16 01:10:34.778384] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:17:19.025 [2024-07-16 01:10:34.778488] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f8100 (9): Bad file descriptor
00:17:19.025 [2024-07-16 01:10:34.794843] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:17:19.025 passed 00:17:19.025 Test: blockdev write read 8 blocks ...passed 00:17:19.025 Test: blockdev write read size > 128k ...passed 00:17:19.025 Test: blockdev write read invalid size ...passed 00:17:19.025 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:19.025 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:19.025 Test: blockdev write read max offset ...passed 00:17:19.025 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:19.025 Test: blockdev writev readv 8 blocks ...passed 00:17:19.025 Test: blockdev writev readv 30 x 1block ...passed 00:17:19.283 Test: blockdev writev readv block ...passed 00:17:19.283 Test: blockdev writev readv size > 128k ...passed 00:17:19.283 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:19.283 Test: blockdev comparev and writev ...[2024-07-16 01:10:35.050392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:19.283 [2024-07-16 01:10:35.050428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:19.283 [2024-07-16 01:10:35.050453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:19.283 [2024-07-16 01:10:35.050470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:19.283 [2024-07-16 01:10:35.050789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:19.283 [2024-07-16 01:10:35.050815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:19.283 [2024-07-16 01:10:35.050837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:19.283 [2024-07-16 01:10:35.050852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:19.283 [2024-07-16 01:10:35.051182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:19.283 [2024-07-16 01:10:35.051207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:19.283 [2024-07-16 01:10:35.051229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:19.283 [2024-07-16 01:10:35.051245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:19.283 [2024-07-16 01:10:35.051574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:19.283 [2024-07-16 01:10:35.051599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:19.283 [2024-07-16 01:10:35.051620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:19.283 [2024-07-16 01:10:35.051637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:19.283 passed 00:17:19.283 Test: blockdev nvme passthru rw ...passed 00:17:19.283 Test: blockdev nvme passthru vendor specific ...[2024-07-16 01:10:35.133231] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:19.283 [2024-07-16 01:10:35.133258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:19.283 [2024-07-16 01:10:35.133423] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:19.283 [2024-07-16 01:10:35.133446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:19.283 [2024-07-16 01:10:35.133610] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:19.283 [2024-07-16 01:10:35.133633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:19.283 [2024-07-16 01:10:35.133803] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:19.283 [2024-07-16 01:10:35.133827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:19.283 passed 00:17:19.283 Test: blockdev nvme admin passthru ...passed 00:17:19.283 Test: blockdev copy ...passed 00:17:19.283 00:17:19.283 Run Summary: Type Total Ran Passed Failed Inactive 00:17:19.283 suites 1 1 n/a 0 0 00:17:19.283 tests 23 23 23 0 0 00:17:19.283 asserts 152 152 152 0 n/a 00:17:19.283 00:17:19.283 Elapsed time = 1.229 seconds 00:17:19.850 01:10:35 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:19.850 01:10:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.850 01:10:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:19.850 01:10:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.850 01:10:35 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:19.850 01:10:35 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:17:19.850 01:10:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:19.850 01:10:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:17:19.850 01:10:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:19.850 01:10:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:17:19.850 01:10:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:19.850 01:10:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:19.850 rmmod nvme_tcp 00:17:19.850 rmmod nvme_fabrics 00:17:19.850 rmmod nvme_keyring 00:17:19.850 01:10:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:19.850 01:10:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:17:19.850 01:10:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:17:19.850 01:10:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 4165744 ']' 00:17:19.850 01:10:35 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 4165744 00:17:19.850 01:10:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 4165744 ']' 00:17:19.850 01:10:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 4165744 00:17:19.850 01:10:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:17:19.850 01:10:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:19.850 01:10:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4165744 00:17:19.850 01:10:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:17:19.850 01:10:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:17:19.850 01:10:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4165744' 00:17:19.850 killing process with pid 4165744 00:17:19.850 01:10:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 4165744 00:17:19.850 01:10:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 4165744 00:17:20.110 01:10:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:20.110 01:10:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:20.110 01:10:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:20.110 01:10:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:20.110 01:10:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:20.110 01:10:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:20.110 01:10:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:20.110 01:10:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:22.642 01:10:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:22.642 00:17:22.642 real 0m6.778s 00:17:22.642 user 0m11.491s 00:17:22.642 sys 0m2.582s 00:17:22.642 01:10:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:22.642 01:10:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:22.642 ************************************ 00:17:22.642 END TEST nvmf_bdevio_no_huge 00:17:22.642 ************************************ 00:17:22.642 01:10:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:22.642 01:10:38 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:22.642 01:10:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:22.642 01:10:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:22.642 01:10:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:22.642 ************************************ 00:17:22.642 START TEST nvmf_tls 00:17:22.642 ************************************ 00:17:22.642 01:10:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:22.642 * Looking for test storage... 
00:17:22.642 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:22.642 01:10:38 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:22.642 01:10:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:22.642 01:10:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:22.642 01:10:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:22.642 01:10:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:22.642 01:10:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:22.642 01:10:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:22.642 01:10:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:22.642 01:10:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:22.642 01:10:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:22.642 01:10:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:22.642 01:10:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:22.642 01:10:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:22.642 01:10:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:17:22.642 01:10:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:22.642 01:10:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:22.642 01:10:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:22.642 01:10:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:22.642 01:10:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:22.642 01:10:38 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:22.642 01:10:38 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:22.642 01:10:38 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:22.642 01:10:38 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.642 01:10:38 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.642 01:10:38 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.642 01:10:38 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:22.642 01:10:38 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.642 01:10:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:17:22.642 01:10:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:22.642 01:10:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:22.642 01:10:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:22.642 01:10:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:22.642 01:10:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:22.642 01:10:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:22.642 01:10:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:22.642 01:10:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:22.642 01:10:38 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:22.642 01:10:38 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:17:22.642 01:10:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:22.642 01:10:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:22.642 01:10:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:22.642 01:10:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:22.642 01:10:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:22.642 01:10:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:22.642 01:10:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:22.642 01:10:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:22.642 01:10:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:22.642 01:10:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:22.642 01:10:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:17:22.642 01:10:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:24.537 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:24.537 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:17:24.537 
01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:17:24.538 Found 0000:09:00.0 (0x8086 - 0x159b) 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:17:24.538 Found 0000:09:00.1 (0x8086 - 0x159b) 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:17:24.538 Found net devices under 0000:09:00.0: cvl_0_0 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:17:24.538 Found net devices under 0000:09:00.1: cvl_0_1 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:24.538 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:24.538 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:17:24.538 00:17:24.538 --- 10.0.0.2 ping statistics --- 00:17:24.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:24.538 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:24.538 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
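The bring-up traced above turns the two ports of one E810 NIC into a self-contained test rig: cvl_0_0 is moved into a fresh network namespace (cvl_0_0_ns_spdk) and addressed as the target at 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24, an iptables rule admits NVMe/TCP on port 4420, and a ping in each direction (around this point in the log) proves the path. A minimal sketch of the same sequence, using the interface names and addresses from this run (root required; the real script also flushes old addresses first):

  ip netns add cvl_0_0_ns_spdk                       # target gets its own net stack
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

Everything the target does from here on runs under ip netns exec cvl_0_0_ns_spdk, which is why the nvmf_tgt command line below is wrapped in NVMF_TARGET_NS_CMD.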
00:17:24.538 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.061 ms 00:17:24.538 00:17:24.538 --- 10.0.0.1 ping statistics --- 00:17:24.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:24.538 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4167964 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4167964 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4167964 ']' 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:24.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:24.538 01:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:24.538 [2024-07-16 01:10:40.525608] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:17:24.538 [2024-07-16 01:10:40.525688] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:24.796 EAL: No free 2048 kB hugepages reported on node 1 00:17:24.796 [2024-07-16 01:10:40.590399] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:24.796 [2024-07-16 01:10:40.690378] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:24.796 [2024-07-16 01:10:40.690432] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:24.796 [2024-07-16 01:10:40.690459] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:24.796 [2024-07-16 01:10:40.690469] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:24.796 [2024-07-16 01:10:40.690478] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:24.796 [2024-07-16 01:10:40.690503] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:24.796 01:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:24.796 01:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:24.796 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:24.796 01:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:24.796 01:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:24.796 01:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:24.796 01:10:40 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:17:24.796 01:10:40 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:25.053 true 00:17:25.053 01:10:40 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:25.053 01:10:40 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:17:25.310 01:10:41 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:17:25.310 01:10:41 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:17:25.310 01:10:41 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:25.566 01:10:41 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:25.566 01:10:41 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:17:25.822 01:10:41 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:17:25.822 01:10:41 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:17:25.822 01:10:41 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:26.079 01:10:41 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:26.079 01:10:41 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:17:26.337 01:10:42 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:17:26.337 01:10:42 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:17:26.337 01:10:42 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:26.337 01:10:42 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:17:26.595 01:10:42 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:17:26.595 01:10:42 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:17:26.595 01:10:42 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:17:26.853 01:10:42 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:26.853 01:10:42 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:17:27.112 01:10:42 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:17:27.112 01:10:42 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:17:27.112 01:10:42 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:27.370 01:10:43 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:27.370 01:10:43 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:17:27.629 01:10:43 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:17:27.629 01:10:43 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:17:27.629 01:10:43 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:17:27.629 01:10:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:27.629 01:10:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:27.629 01:10:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:27.629 01:10:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:17:27.629 01:10:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:17:27.629 01:10:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:27.629 01:10:43 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:27.629 01:10:43 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:27.629 01:10:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:17:27.629 01:10:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:27.629 01:10:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:27.629 01:10:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:17:27.629 01:10:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:17:27.629 01:10:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:27.629 01:10:43 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:27.629 01:10:43 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:17:27.629 01:10:43 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.jLrWVStUll 00:17:27.629 01:10:43 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:17:27.629 01:10:43 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.5fQegtYqjh 00:17:27.629 01:10:43 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:27.629 01:10:43 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:27.629 01:10:43 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.jLrWVStUll 00:17:27.629 01:10:43 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.5fQegtYqjh 00:17:27.629 01:10:43 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:17:27.886 01:10:43 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:17:28.452 01:10:44 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.jLrWVStUll 00:17:28.452 01:10:44 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.jLrWVStUll 00:17:28.452 01:10:44 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:28.710 [2024-07-16 01:10:44.499462] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:28.710 01:10:44 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:28.967 01:10:44 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:29.225 [2024-07-16 01:10:44.988805] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:29.225 [2024-07-16 01:10:44.989089] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:29.225 01:10:45 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:29.483 malloc0 00:17:29.483 01:10:45 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:29.740 01:10:45 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.jLrWVStUll 00:17:29.740 [2024-07-16 01:10:45.720670] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:29.997 01:10:45 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.jLrWVStUll 00:17:29.997 EAL: No free 2048 kB hugepages reported on node 1 00:17:39.971 Initializing NVMe Controllers 00:17:39.971 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:39.971 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:39.971 Initialization complete. Launching workers. 
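While spdk_nvme_perf hammers the TLS listener (its summary follows), it is worth collecting what the trace above actually configured. Starting nvmf_tgt with --wait-for-rpc let the script select the ssl socket implementation and pin TLS 1.3 before framework initialization (after briefly probing the KTLS toggle on and off); the two PSKs were rendered into the NVMe-oF interchange format and written to 0600 key files; and the subsystem got a TLS-requiring listener (-k) plus a host entry bound to the first key, which the perf run above then uses via -S ssl and --psk-path. A condensed sketch of that sequence ($rpc stands in for the in-tree scripts/rpc.py; the python one-liner is my paraphrase of format_interchange_psk, appending a little-endian CRC32 to the configured PSK before base64, which matches the key string seen in the log but is an approximation, not the helper's exact source):

  rpc=./scripts/rpc.py                  # shorthand for the full path used in the log

  # Only possible before framework init, hence --wait-for-rpc:
  $rpc sock_set_default_impl -i ssl
  $rpc sock_impl_set_options -i ssl --tls-version 13
  $rpc framework_start_init

  # PSK interchange format: NVMeTLSkey-1:<hash>:<base64(PSK || CRC32)>:
  key_path=$(mktemp)
  python3 -c 'import base64, struct, zlib; psk = b"00112233445566778899aabbccddeeff"; print("NVMeTLSkey-1:01:%s:" % base64.b64encode(psk + struct.pack("<I", zlib.crc32(psk))).decode())' > "$key_path"
  chmod 0600 "$key_path"

  # TLS-enabled target: -k makes the listener require a secure channel.
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key_path"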
00:17:39.971 ======================================================== 00:17:39.971 Latency(us) 00:17:39.971 Device Information : IOPS MiB/s Average min max 00:17:39.971 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8694.16 33.96 7363.22 1012.12 8942.50 00:17:39.971 ======================================================== 00:17:39.971 Total : 8694.16 33.96 7363.22 1012.12 8942.50 00:17:39.971 00:17:39.971 01:10:55 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.jLrWVStUll 00:17:39.971 01:10:55 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:39.971 01:10:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:39.971 01:10:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:39.971 01:10:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.jLrWVStUll' 00:17:39.971 01:10:55 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:39.971 01:10:55 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4169739 00:17:39.971 01:10:55 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:39.971 01:10:55 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:39.971 01:10:55 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4169739 /var/tmp/bdevperf.sock 00:17:39.971 01:10:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4169739 ']' 00:17:39.971 01:10:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:39.971 01:10:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:39.971 01:10:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:39.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:39.971 01:10:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:39.971 01:10:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:39.971 [2024-07-16 01:10:55.888399] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 
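The functional check now switches from perf to bdevperf. run_bdevperf starts the app suspended with -z on a private RPC socket, waits for that socket to appear, attaches a TLS-wrapped controller with the matching key, and then drives the configured 10-second verify workload through the bundled helper script. The same harness, condensed from the trace (full in-tree paths shortened):

  # Start bdevperf idle; -z waits for RPC so bdevs can be added first.
  ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 &

  # Attach over NVMe/TCP with TLS; --psk is the key registered for host1.
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk /tmp/tmp.jLrWVStUll

  # Run the workload against the attached namespace.
  ./examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests

The -t 20 on the helper is its RPC timeout; the 10-second run length comes from -t 10 on the bdevperf command line itself.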
00:17:39.971 [2024-07-16 01:10:55.888474] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4169739 ] 00:17:39.971 EAL: No free 2048 kB hugepages reported on node 1 00:17:39.971 [2024-07-16 01:10:55.947628] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.228 [2024-07-16 01:10:56.060532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:40.228 01:10:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:40.228 01:10:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:40.228 01:10:56 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.jLrWVStUll 00:17:40.484 [2024-07-16 01:10:56.420001] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:40.484 [2024-07-16 01:10:56.420132] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:40.741 TLSTESTn1 00:17:40.741 01:10:56 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:40.741 Running I/O for 10 seconds... 00:17:50.702 00:17:50.702 Latency(us) 00:17:50.702 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:50.702 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:50.702 Verification LBA range: start 0x0 length 0x2000 00:17:50.702 TLSTESTn1 : 10.03 3174.81 12.40 0.00 0.00 40229.48 10825.58 53205.52 00:17:50.702 =================================================================================================================== 00:17:50.702 Total : 3174.81 12.40 0.00 0.00 40229.48 10825.58 53205.52 00:17:50.702 0 00:17:50.702 01:11:06 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:50.702 01:11:06 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 4169739 00:17:50.702 01:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4169739 ']' 00:17:50.702 01:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4169739 00:17:50.702 01:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:50.702 01:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:50.702 01:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4169739 00:17:50.702 01:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:50.702 01:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:50.961 01:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4169739' 00:17:50.961 killing process with pid 4169739 00:17:50.961 01:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4169739 00:17:50.961 Received shutdown signal, test time was about 10.000000 seconds 00:17:50.961 00:17:50.961 Latency(us) 00:17:50.961 Device Information : runtime(s) IOPS MiB/s Fail/s 
TO/s Average min max 00:17:50.961 =================================================================================================================== 00:17:50.961 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:50.961 [2024-07-16 01:11:06.696903] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:50.961 01:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4169739 00:17:51.220 01:11:06 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5fQegtYqjh 00:17:51.220 01:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:51.220 01:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5fQegtYqjh 00:17:51.221 01:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:51.221 01:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:51.221 01:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:51.221 01:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:51.221 01:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5fQegtYqjh 00:17:51.221 01:11:06 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:51.221 01:11:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:51.221 01:11:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:51.221 01:11:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.5fQegtYqjh' 00:17:51.221 01:11:06 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:51.221 01:11:06 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4171057 00:17:51.221 01:11:06 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:51.221 01:11:06 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:51.221 01:11:06 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4171057 /var/tmp/bdevperf.sock 00:17:51.221 01:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4171057 ']' 00:17:51.221 01:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:51.221 01:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:51.221 01:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:51.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:51.221 01:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:51.221 01:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:51.221 [2024-07-16 01:11:07.009635] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 
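Everything from here down is intentional failure. tls.sh repeats the same attach with a key the target never registered for this host, then with the wrong host NQN, then with the wrong subsystem NQN, and finally with no key at all, wrapping each attempt in autotest_common.sh's NOT helper so that an attach which fails makes the test pass. A rough paraphrase of what the helper does, going by the @648/@651/@659/@675 lines traced above (not the exact source):

  NOT() {
      local es=0
      "$@" || es=$?
      # es > 128 (death by signal) is screened separately in the real helper.
      (( es != 0 ))        # success only if the wrapped command failed
  }

  # Expected to fail: this key was never registered for host1 on the target.
  NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5fQegtYqjh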
00:17:51.221 [2024-07-16 01:11:07.009710] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4171057 ] 00:17:51.221 EAL: No free 2048 kB hugepages reported on node 1 00:17:51.221 [2024-07-16 01:11:07.067056] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.221 [2024-07-16 01:11:07.171628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:51.479 01:11:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:51.479 01:11:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:51.479 01:11:07 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5fQegtYqjh 00:17:51.738 [2024-07-16 01:11:07.538010] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:51.738 [2024-07-16 01:11:07.538122] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:51.738 [2024-07-16 01:11:07.547923] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:51.738 [2024-07-16 01:11:07.548400] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb60150 (107): Transport endpoint is not connected 00:17:51.738 [2024-07-16 01:11:07.549391] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb60150 (9): Bad file descriptor 00:17:51.738 [2024-07-16 01:11:07.550391] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:51.738 [2024-07-16 01:11:07.550411] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:51.738 [2024-07-16 01:11:07.550439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
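The failure shape here is worth reading. The identity offered by the initiator (host1/cnode1) is registered on the target, so there is no PSK-lookup error this time; the traces suggest the handshake itself fails because the offered key is not the one bound to that host, and the target drops the TCP connection. On the initiator that surfaces as errno 107 (ENOTCONN, "Transport endpoint is not connected") out of spdk_sock_recv, a dead qpair, and the JSON-RPC -5 Input/output error dumped next, which is precisely the nonzero exit NOT is waiting for.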
00:17:51.738 request: 00:17:51.738 { 00:17:51.738 "name": "TLSTEST", 00:17:51.738 "trtype": "tcp", 00:17:51.738 "traddr": "10.0.0.2", 00:17:51.738 "adrfam": "ipv4", 00:17:51.738 "trsvcid": "4420", 00:17:51.738 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:51.738 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:51.738 "prchk_reftag": false, 00:17:51.738 "prchk_guard": false, 00:17:51.738 "hdgst": false, 00:17:51.738 "ddgst": false, 00:17:51.738 "psk": "/tmp/tmp.5fQegtYqjh", 00:17:51.738 "method": "bdev_nvme_attach_controller", 00:17:51.738 "req_id": 1 00:17:51.738 } 00:17:51.738 Got JSON-RPC error response 00:17:51.738 response: 00:17:51.738 { 00:17:51.738 "code": -5, 00:17:51.738 "message": "Input/output error" 00:17:51.738 } 00:17:51.738 01:11:07 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 4171057 00:17:51.738 01:11:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4171057 ']' 00:17:51.738 01:11:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4171057 00:17:51.738 01:11:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:51.738 01:11:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:51.738 01:11:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4171057 00:17:51.738 01:11:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:51.738 01:11:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:51.738 01:11:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4171057' 00:17:51.738 killing process with pid 4171057 00:17:51.739 01:11:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4171057 00:17:51.739 Received shutdown signal, test time was about 10.000000 seconds 00:17:51.739 00:17:51.739 Latency(us) 00:17:51.739 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:51.739 =================================================================================================================== 00:17:51.739 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:51.739 [2024-07-16 01:11:07.600225] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:51.739 01:11:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4171057 00:17:51.997 01:11:07 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:51.997 01:11:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:51.997 01:11:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:51.997 01:11:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:51.997 01:11:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:51.997 01:11:07 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.jLrWVStUll 00:17:51.997 01:11:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:51.997 01:11:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.jLrWVStUll 00:17:51.997 01:11:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:51.997 01:11:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:51.997 01:11:07 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:51.997 01:11:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:51.997 01:11:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.jLrWVStUll 00:17:51.997 01:11:07 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:51.997 01:11:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:51.997 01:11:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:17:51.997 01:11:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.jLrWVStUll' 00:17:51.997 01:11:07 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:51.997 01:11:07 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4171193 00:17:51.997 01:11:07 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:51.997 01:11:07 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:51.998 01:11:07 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4171193 /var/tmp/bdevperf.sock 00:17:51.998 01:11:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4171193 ']' 00:17:51.998 01:11:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:51.998 01:11:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:51.998 01:11:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:51.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:51.998 01:11:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:51.998 01:11:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:51.998 [2024-07-16 01:11:07.896599] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 
00:17:51.998 [2024-07-16 01:11:07.896673] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4171193 ] 00:17:51.998 EAL: No free 2048 kB hugepages reported on node 1 00:17:51.998 [2024-07-16 01:11:07.953303] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.256 [2024-07-16 01:11:08.059826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:52.256 01:11:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:52.256 01:11:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:52.256 01:11:08 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.jLrWVStUll 00:17:52.514 [2024-07-16 01:11:08.444225] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:52.514 [2024-07-16 01:11:08.444379] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:52.514 [2024-07-16 01:11:08.450120] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:52.514 [2024-07-16 01:11:08.450156] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:52.514 [2024-07-16 01:11:08.450205] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:52.514 [2024-07-16 01:11:08.450633] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc22150 (107): Transport endpoint is not connected 00:17:52.514 [2024-07-16 01:11:08.451622] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc22150 (9): Bad file descriptor 00:17:52.514 [2024-07-16 01:11:08.452621] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:52.514 [2024-07-16 01:11:08.452641] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:52.514 [2024-07-16 01:11:08.452670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
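This second mismatch fails one step earlier than the first. The error above shows the lookup key the target builds from the TLS ClientHello, the PSK identity "NVMe0R01 <hostnqn> <subnqn>". Reading the prefix per the NVMe/TCP TLS scheme (my gloss, not stated in the log): identity format version 0, R for a retained PSK, and 01 for the SHA-256 hash indicator, the same 01 that appears in the interchange keys above. Only host1 was registered with a key, so the identity presented for host2 misses:

  # Identity searched for, exactly as printed in the error above:
  identity="NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1"

The next case below flips the mismatch, right host NQN against the wrong subsystem (cnode2), and dies in the same lookup.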
00:17:52.514 request: 00:17:52.514 { 00:17:52.514 "name": "TLSTEST", 00:17:52.514 "trtype": "tcp", 00:17:52.514 "traddr": "10.0.0.2", 00:17:52.514 "adrfam": "ipv4", 00:17:52.514 "trsvcid": "4420", 00:17:52.514 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:52.514 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:52.514 "prchk_reftag": false, 00:17:52.514 "prchk_guard": false, 00:17:52.514 "hdgst": false, 00:17:52.514 "ddgst": false, 00:17:52.514 "psk": "/tmp/tmp.jLrWVStUll", 00:17:52.514 "method": "bdev_nvme_attach_controller", 00:17:52.514 "req_id": 1 00:17:52.514 } 00:17:52.514 Got JSON-RPC error response 00:17:52.514 response: 00:17:52.514 { 00:17:52.514 "code": -5, 00:17:52.514 "message": "Input/output error" 00:17:52.514 } 00:17:52.514 01:11:08 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 4171193 00:17:52.514 01:11:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4171193 ']' 00:17:52.514 01:11:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4171193 00:17:52.514 01:11:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:52.514 01:11:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:52.514 01:11:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4171193 00:17:52.514 01:11:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:52.514 01:11:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:52.514 01:11:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4171193' 00:17:52.514 killing process with pid 4171193 00:17:52.514 01:11:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4171193 00:17:52.514 Received shutdown signal, test time was about 10.000000 seconds 00:17:52.514 00:17:52.514 Latency(us) 00:17:52.514 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:52.514 =================================================================================================================== 00:17:52.514 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:52.514 [2024-07-16 01:11:08.505178] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:52.514 01:11:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4171193 00:17:52.772 01:11:08 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:52.772 01:11:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:52.772 01:11:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:52.772 01:11:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:52.772 01:11:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:52.772 01:11:08 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.jLrWVStUll 00:17:52.773 01:11:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:52.773 01:11:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.jLrWVStUll 00:17:52.773 01:11:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:52.773 01:11:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:52.773 01:11:08 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:52.773 01:11:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:52.773 01:11:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.jLrWVStUll 00:17:52.773 01:11:08 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:52.773 01:11:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:52.773 01:11:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:52.773 01:11:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.jLrWVStUll' 00:17:52.773 01:11:08 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:52.773 01:11:08 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4171220 00:17:52.773 01:11:08 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:52.773 01:11:08 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:52.773 01:11:08 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4171220 /var/tmp/bdevperf.sock 00:17:52.773 01:11:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4171220 ']' 00:17:52.773 01:11:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:52.773 01:11:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:52.773 01:11:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:52.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:52.773 01:11:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:52.773 01:11:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:53.030 [2024-07-16 01:11:08.804554] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 
00:17:53.030 [2024-07-16 01:11:08.804627] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4171220 ] 00:17:53.030 EAL: No free 2048 kB hugepages reported on node 1 00:17:53.030 [2024-07-16 01:11:08.865916] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.030 [2024-07-16 01:11:08.970267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:53.288 01:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:53.288 01:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:53.288 01:11:09 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.jLrWVStUll 00:17:53.545 [2024-07-16 01:11:09.351800] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:53.545 [2024-07-16 01:11:09.351947] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:53.545 [2024-07-16 01:11:09.362522] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:53.545 [2024-07-16 01:11:09.362556] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:53.545 [2024-07-16 01:11:09.362610] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:53.545 [2024-07-16 01:11:09.363330] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161c150 (107): Transport endpoint is not connected 00:17:53.545 [2024-07-16 01:11:09.364320] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161c150 (9): Bad file descriptor 00:17:53.545 [2024-07-16 01:11:09.365320] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:17:53.545 [2024-07-16 01:11:09.365339] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:53.545 [2024-07-16 01:11:09.365353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:17:53.545 request: 00:17:53.545 { 00:17:53.545 "name": "TLSTEST", 00:17:53.545 "trtype": "tcp", 00:17:53.545 "traddr": "10.0.0.2", 00:17:53.545 "adrfam": "ipv4", 00:17:53.545 "trsvcid": "4420", 00:17:53.545 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:53.545 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:53.545 "prchk_reftag": false, 00:17:53.546 "prchk_guard": false, 00:17:53.546 "hdgst": false, 00:17:53.546 "ddgst": false, 00:17:53.546 "psk": "/tmp/tmp.jLrWVStUll", 00:17:53.546 "method": "bdev_nvme_attach_controller", 00:17:53.546 "req_id": 1 00:17:53.546 } 00:17:53.546 Got JSON-RPC error response 00:17:53.546 response: 00:17:53.546 { 00:17:53.546 "code": -5, 00:17:53.546 "message": "Input/output error" 00:17:53.546 } 00:17:53.546 01:11:09 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 4171220 00:17:53.546 01:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4171220 ']' 00:17:53.546 01:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4171220 00:17:53.546 01:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:53.546 01:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:53.546 01:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4171220 00:17:53.546 01:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:53.546 01:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:53.546 01:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4171220' 00:17:53.546 killing process with pid 4171220 00:17:53.546 01:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4171220 00:17:53.546 Received shutdown signal, test time was about 10.000000 seconds 00:17:53.546 00:17:53.546 Latency(us) 00:17:53.546 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:53.546 =================================================================================================================== 00:17:53.546 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:53.546 [2024-07-16 01:11:09.417626] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:53.546 01:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4171220 00:17:53.803 01:11:09 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:53.803 01:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:53.803 01:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:53.803 01:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:53.803 01:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:53.803 01:11:09 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:53.803 01:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:53.803 01:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:53.803 01:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:53.803 01:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:53.803 01:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type 
-t run_bdevperf 00:17:53.803 01:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:53.803 01:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:53.803 01:11:09 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:53.803 01:11:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:53.803 01:11:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:53.803 01:11:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:17:53.803 01:11:09 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:53.803 01:11:09 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4171350 00:17:53.803 01:11:09 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:53.803 01:11:09 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:53.803 01:11:09 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4171350 /var/tmp/bdevperf.sock 00:17:53.803 01:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4171350 ']' 00:17:53.803 01:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:53.803 01:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:53.803 01:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:53.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:53.803 01:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:53.803 01:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:53.803 [2024-07-16 01:11:09.723849] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 
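The final case drops TLS on the initiator side entirely: the attach traced below is the same bdev_nvme_attach_controller call but with no --psk, so the connection arrives as plaintext NVMe/TCP while the listener was created with -k and requires a secure channel. The target side never completes the connection, and the initiator ends in the now-familiar ENOTCONN and JSON-RPC -5 pattern. The attempted call, per the trace:

  # No --psk: plaintext TCP against a listener that demands TLS (-k).
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1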
00:17:53.803 [2024-07-16 01:11:09.723924] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4171350 ] 00:17:53.803 EAL: No free 2048 kB hugepages reported on node 1 00:17:53.803 [2024-07-16 01:11:09.784474] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.060 [2024-07-16 01:11:09.893844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:54.060 01:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:54.060 01:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:54.060 01:11:09 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:54.318 [2024-07-16 01:11:10.244905] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:54.318 [2024-07-16 01:11:10.246641] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a910 (9): Bad file descriptor 00:17:54.318 [2024-07-16 01:11:10.247639] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:54.318 [2024-07-16 01:11:10.247664] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:54.318 [2024-07-16 01:11:10.247693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:54.318 request: 00:17:54.318 { 00:17:54.318 "name": "TLSTEST", 00:17:54.318 "trtype": "tcp", 00:17:54.318 "traddr": "10.0.0.2", 00:17:54.318 "adrfam": "ipv4", 00:17:54.318 "trsvcid": "4420", 00:17:54.318 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:54.318 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:54.318 "prchk_reftag": false, 00:17:54.318 "prchk_guard": false, 00:17:54.318 "hdgst": false, 00:17:54.318 "ddgst": false, 00:17:54.318 "method": "bdev_nvme_attach_controller", 00:17:54.318 "req_id": 1 00:17:54.318 } 00:17:54.318 Got JSON-RPC error response 00:17:54.318 response: 00:17:54.318 { 00:17:54.318 "code": -5, 00:17:54.318 "message": "Input/output error" 00:17:54.318 } 00:17:54.318 01:11:10 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 4171350 00:17:54.318 01:11:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4171350 ']' 00:17:54.318 01:11:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4171350 00:17:54.318 01:11:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:54.318 01:11:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:54.318 01:11:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4171350 00:17:54.318 01:11:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:54.318 01:11:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:54.318 01:11:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4171350' 00:17:54.318 killing process with pid 4171350 00:17:54.318 01:11:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4171350 00:17:54.318 Received shutdown signal, test time was about 10.000000 seconds 00:17:54.318 00:17:54.318 Latency(us) 00:17:54.318 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:54.318 =================================================================================================================== 00:17:54.318 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:54.318 01:11:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4171350 00:17:54.576 01:11:10 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:54.576 01:11:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:54.576 01:11:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:54.576 01:11:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:54.576 01:11:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:54.576 01:11:10 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 4167964 00:17:54.576 01:11:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4167964 ']' 00:17:54.576 01:11:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4167964 00:17:54.576 01:11:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:54.576 01:11:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:54.576 01:11:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4167964 00:17:54.833 01:11:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:54.833 01:11:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:54.833 01:11:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4167964' 00:17:54.833 
killing process with pid 4167964 00:17:54.833 01:11:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4167964 00:17:54.833 [2024-07-16 01:11:10.587179] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:54.833 01:11:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4167964 00:17:55.090 01:11:10 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:17:55.090 01:11:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:17:55.090 01:11:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:55.090 01:11:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:55.090 01:11:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:17:55.090 01:11:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:17:55.091 01:11:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:55.091 01:11:10 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:55.091 01:11:10 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:17:55.091 01:11:10 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.PCWAyVlnnl 00:17:55.091 01:11:10 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:55.091 01:11:10 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.PCWAyVlnnl 00:17:55.091 01:11:10 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:17:55.091 01:11:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:55.091 01:11:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:55.091 01:11:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:55.091 01:11:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4171503 00:17:55.091 01:11:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:55.091 01:11:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4171503 00:17:55.091 01:11:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4171503 ']' 00:17:55.091 01:11:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:55.091 01:11:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:55.091 01:11:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:55.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:55.091 01:11:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:55.091 01:11:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:55.091 [2024-07-16 01:11:10.958036] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 
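[Note: the key_long value generated above is a TLS pre-shared key in NVMe interchange format: the literal prefix NVMeTLSkey-1, a two-digit hash identifier (02 here), a base64 payload, and a trailing ':'. The test writes it to a mktemp file and locks that file down to 0600 before use. A hedged sketch of how such a key can be reproduced; the CRC-32 detail is an assumption (the helper's python body is not echoed in this log), chosen because it matches the structure of the printed value:]

# sketch of format_interchange_psk: base64 over key plus an appended checksum
python3 - <<'EOF'
import base64, zlib
key = b"00112233445566778899aabbccddeeff0011223344556677"
crc = zlib.crc32(key).to_bytes(4, "little")  # assumed: 4-byte little-endian CRC-32 appended
print(f"NVMeTLSkey-1:02:{base64.b64encode(key + crc).decode()}:")
EOF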
00:17:55.091 [2024-07-16 01:11:10.958120] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:55.091 EAL: No free 2048 kB hugepages reported on node 1 00:17:55.091 [2024-07-16 01:11:11.022792] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.374 [2024-07-16 01:11:11.130605] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:55.374 [2024-07-16 01:11:11.130660] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:55.374 [2024-07-16 01:11:11.130686] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:55.374 [2024-07-16 01:11:11.130697] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:55.374 [2024-07-16 01:11:11.130706] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:55.374 [2024-07-16 01:11:11.130731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:55.374 01:11:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:55.374 01:11:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:55.374 01:11:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:55.374 01:11:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:55.374 01:11:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:55.374 01:11:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:55.374 01:11:11 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.PCWAyVlnnl 00:17:55.374 01:11:11 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.PCWAyVlnnl 00:17:55.374 01:11:11 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:55.646 [2024-07-16 01:11:11.486333] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:55.646 01:11:11 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:55.903 01:11:11 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:56.160 [2024-07-16 01:11:11.963558] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:56.160 [2024-07-16 01:11:11.963776] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:56.160 01:11:11 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:56.417 malloc0 00:17:56.417 01:11:12 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:56.674 01:11:12 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.PCWAyVlnnl 00:17:56.932 [2024-07-16 01:11:12.692396] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:56.932 01:11:12 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PCWAyVlnnl 00:17:56.932 01:11:12 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:56.932 01:11:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:56.932 01:11:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:56.932 01:11:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.PCWAyVlnnl' 00:17:56.932 01:11:12 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:56.932 01:11:12 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4171787 00:17:56.932 01:11:12 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:56.932 01:11:12 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:56.932 01:11:12 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4171787 /var/tmp/bdevperf.sock 00:17:56.932 01:11:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4171787 ']' 00:17:56.932 01:11:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:56.932 01:11:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:56.932 01:11:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:56.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:56.932 01:11:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:56.932 01:11:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:56.932 [2024-07-16 01:11:12.756607] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 
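[Note: stripped of the xtrace noise, the happy-path TLS setup just completed on the target, plus the initiator steps that follow below, reduce to a handful of RPCs, all taken from this run (rpc.py/bdevperf.py paths shortened to the spdk repo root):]

# target side (setup_nvmf_tgt): -k on the listener is what enables TLS
scripts/rpc.py nvmf_create_transport -t tcp -o
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.PCWAyVlnnl
# initiator side (run_bdevperf, below): attach with the same key, then drive verify I/O
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.PCWAyVlnnl
examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests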
00:17:56.932 [2024-07-16 01:11:12.756677] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4171787 ] 00:17:56.932 EAL: No free 2048 kB hugepages reported on node 1 00:17:56.932 [2024-07-16 01:11:12.813292] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.932 [2024-07-16 01:11:12.917466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:57.189 01:11:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:57.189 01:11:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:57.189 01:11:13 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.PCWAyVlnnl 00:17:57.446 [2024-07-16 01:11:13.248926] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:57.446 [2024-07-16 01:11:13.249075] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:57.446 TLSTESTn1 00:17:57.446 01:11:13 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:57.703 Running I/O for 10 seconds... 00:18:07.662 00:18:07.662 Latency(us) 00:18:07.662 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:07.662 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:07.662 Verification LBA range: start 0x0 length 0x2000 00:18:07.662 TLSTESTn1 : 10.03 3113.48 12.16 0.00 0.00 41022.97 9611.95 58254.22 00:18:07.662 =================================================================================================================== 00:18:07.662 Total : 3113.48 12.16 0.00 0.00 41022.97 9611.95 58254.22 00:18:07.662 0 00:18:07.662 01:11:23 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:07.662 01:11:23 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 4171787 00:18:07.662 01:11:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4171787 ']' 00:18:07.662 01:11:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4171787 00:18:07.662 01:11:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:07.662 01:11:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:07.662 01:11:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4171787 00:18:07.662 01:11:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:07.662 01:11:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:07.662 01:11:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4171787' 00:18:07.662 killing process with pid 4171787 00:18:07.662 01:11:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4171787 00:18:07.662 Received shutdown signal, test time was about 10.000000 seconds 00:18:07.662 00:18:07.662 Latency(us) 00:18:07.662 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:18:07.662 =================================================================================================================== 00:18:07.662 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:07.663 [2024-07-16 01:11:23.544386] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:07.663 01:11:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4171787 00:18:07.920 01:11:23 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.PCWAyVlnnl 00:18:07.920 01:11:23 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PCWAyVlnnl 00:18:07.920 01:11:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:07.920 01:11:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PCWAyVlnnl 00:18:07.920 01:11:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:07.920 01:11:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:07.920 01:11:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:07.920 01:11:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:07.920 01:11:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PCWAyVlnnl 00:18:07.920 01:11:23 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:07.920 01:11:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:07.920 01:11:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:07.920 01:11:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.PCWAyVlnnl' 00:18:07.920 01:11:23 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:07.920 01:11:23 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4173106 00:18:07.920 01:11:23 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:07.920 01:11:23 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:07.920 01:11:23 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4173106 /var/tmp/bdevperf.sock 00:18:07.920 01:11:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4173106 ']' 00:18:07.920 01:11:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:07.920 01:11:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:07.920 01:11:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:07.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:07.920 01:11:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:07.920 01:11:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:07.920 [2024-07-16 01:11:23.831023] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 
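[Note: after the successful run, tls.sh@170 deliberately loosens the key file to 0666, and the attach attempt below is expected to fail on the initiator side: bdev_nvme_load_psk rejects a PSK file with open permissions and the RPC returns code -1, Operation not permitted. The check reduces to:]

chmod 0666 /tmp/tmp.PCWAyVlnnl
# expected to fail: bdev_nvme refuses a group/world-accessible PSK file
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.PCWAyVlnnl \
    && echo 'unexpected success' || echo 'rejected as expected'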
00:18:07.921 [2024-07-16 01:11:23.831103] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4173106 ] 00:18:07.921 EAL: No free 2048 kB hugepages reported on node 1 00:18:07.921 [2024-07-16 01:11:23.888020] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.178 [2024-07-16 01:11:23.996650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:08.178 01:11:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:08.178 01:11:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:08.178 01:11:24 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.PCWAyVlnnl 00:18:08.436 [2024-07-16 01:11:24.317127] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:08.436 [2024-07-16 01:11:24.317221] bdev_nvme.c:6130:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:08.436 [2024-07-16 01:11:24.317236] bdev_nvme.c:6235:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.PCWAyVlnnl 00:18:08.436 request: 00:18:08.436 { 00:18:08.436 "name": "TLSTEST", 00:18:08.436 "trtype": "tcp", 00:18:08.436 "traddr": "10.0.0.2", 00:18:08.436 "adrfam": "ipv4", 00:18:08.436 "trsvcid": "4420", 00:18:08.436 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:08.436 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:08.436 "prchk_reftag": false, 00:18:08.436 "prchk_guard": false, 00:18:08.436 "hdgst": false, 00:18:08.436 "ddgst": false, 00:18:08.436 "psk": "/tmp/tmp.PCWAyVlnnl", 00:18:08.436 "method": "bdev_nvme_attach_controller", 00:18:08.436 "req_id": 1 00:18:08.436 } 00:18:08.436 Got JSON-RPC error response 00:18:08.436 response: 00:18:08.436 { 00:18:08.436 "code": -1, 00:18:08.436 "message": "Operation not permitted" 00:18:08.436 } 00:18:08.436 01:11:24 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 4173106 00:18:08.436 01:11:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4173106 ']' 00:18:08.436 01:11:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4173106 00:18:08.436 01:11:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:08.436 01:11:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:08.436 01:11:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4173106 00:18:08.436 01:11:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:08.436 01:11:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:08.436 01:11:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4173106' 00:18:08.436 killing process with pid 4173106 00:18:08.437 01:11:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4173106 00:18:08.437 Received shutdown signal, test time was about 10.000000 seconds 00:18:08.437 00:18:08.437 Latency(us) 00:18:08.437 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:08.437 
=================================================================================================================== 00:18:08.437 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:08.437 01:11:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4173106 00:18:08.695 01:11:24 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:08.695 01:11:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:08.695 01:11:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:08.695 01:11:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:08.695 01:11:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:08.695 01:11:24 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 4171503 00:18:08.695 01:11:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4171503 ']' 00:18:08.695 01:11:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4171503 00:18:08.695 01:11:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:08.695 01:11:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:08.695 01:11:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4171503 00:18:08.695 01:11:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:08.695 01:11:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:08.695 01:11:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4171503' 00:18:08.695 killing process with pid 4171503 00:18:08.695 01:11:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4171503 00:18:08.695 [2024-07-16 01:11:24.650275] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:08.695 01:11:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4171503 00:18:08.955 01:11:24 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:18:08.955 01:11:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:08.955 01:11:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:08.955 01:11:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:08.955 01:11:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4173247 00:18:08.955 01:11:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:08.955 01:11:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4173247 00:18:08.955 01:11:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4173247 ']' 00:18:08.955 01:11:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:08.955 01:11:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:08.955 01:11:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:08.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
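[Note: with the key still world-readable, a fresh target is started and the same permission rule is exercised on the target side below (tls.sh@177): tcp_load_psk refuses the file during nvmf_subsystem_add_host, so that RPC fails with code -32603, Internal error. A sketch of that expectation:]

# expected to fail while /tmp/tmp.PCWAyVlnnl is still mode 0666
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
    --psk /tmp/tmp.PCWAyVlnnl \
    || echo 'add_host rejected: PSK file permissions too open'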
00:18:08.955 01:11:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:08.955 01:11:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:09.214 [2024-07-16 01:11:24.974610] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:18:09.214 [2024-07-16 01:11:24.974695] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:09.214 EAL: No free 2048 kB hugepages reported on node 1 00:18:09.214 [2024-07-16 01:11:25.035708] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.214 [2024-07-16 01:11:25.140125] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:09.214 [2024-07-16 01:11:25.140180] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:09.214 [2024-07-16 01:11:25.140208] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:09.214 [2024-07-16 01:11:25.140220] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:09.214 [2024-07-16 01:11:25.140230] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:09.214 [2024-07-16 01:11:25.140270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:09.472 01:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:09.472 01:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:09.472 01:11:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:09.472 01:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:09.472 01:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:09.472 01:11:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:09.472 01:11:25 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.PCWAyVlnnl 00:18:09.472 01:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:09.472 01:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.PCWAyVlnnl 00:18:09.472 01:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:18:09.472 01:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:09.472 01:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:18:09.472 01:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:09.472 01:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.PCWAyVlnnl 00:18:09.472 01:11:25 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.PCWAyVlnnl 00:18:09.472 01:11:25 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:09.730 [2024-07-16 01:11:25.558360] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:09.730 01:11:25 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:09.988 
01:11:25 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:10.245 [2024-07-16 01:11:26.087753] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:10.245 [2024-07-16 01:11:26.088002] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:10.245 01:11:26 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:10.503 malloc0 00:18:10.503 01:11:26 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:10.761 01:11:26 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.PCWAyVlnnl 00:18:11.018 [2024-07-16 01:11:26.880409] tcp.c:3603:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:11.018 [2024-07-16 01:11:26.880442] tcp.c:3689:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:18:11.018 [2024-07-16 01:11:26.880487] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:11.018 request: 00:18:11.018 { 00:18:11.018 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:11.018 "host": "nqn.2016-06.io.spdk:host1", 00:18:11.018 "psk": "/tmp/tmp.PCWAyVlnnl", 00:18:11.018 "method": "nvmf_subsystem_add_host", 00:18:11.018 "req_id": 1 00:18:11.018 } 00:18:11.018 Got JSON-RPC error response 00:18:11.018 response: 00:18:11.018 { 00:18:11.018 "code": -32603, 00:18:11.018 "message": "Internal error" 00:18:11.018 } 00:18:11.018 01:11:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:11.018 01:11:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:11.018 01:11:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:11.018 01:11:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:11.018 01:11:26 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 4173247 00:18:11.018 01:11:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4173247 ']' 00:18:11.018 01:11:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4173247 00:18:11.018 01:11:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:11.018 01:11:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:11.018 01:11:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4173247 00:18:11.018 01:11:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:11.018 01:11:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:11.018 01:11:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4173247' 00:18:11.018 killing process with pid 4173247 00:18:11.018 01:11:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4173247 00:18:11.018 01:11:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4173247 00:18:11.275 01:11:27 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.PCWAyVlnnl 00:18:11.275 01:11:27 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:18:11.275 
01:11:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:11.275 01:11:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:11.275 01:11:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:11.275 01:11:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4173543 00:18:11.275 01:11:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:11.275 01:11:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4173543 00:18:11.275 01:11:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4173543 ']' 00:18:11.275 01:11:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:11.275 01:11:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:11.275 01:11:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:11.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:11.275 01:11:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:11.275 01:11:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:11.275 [2024-07-16 01:11:27.228320] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:18:11.275 [2024-07-16 01:11:27.228391] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:11.275 EAL: No free 2048 kB hugepages reported on node 1 00:18:11.534 [2024-07-16 01:11:27.290801] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.534 [2024-07-16 01:11:27.397853] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:11.534 [2024-07-16 01:11:27.397920] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:11.534 [2024-07-16 01:11:27.397948] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:11.534 [2024-07-16 01:11:27.397967] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:11.534 [2024-07-16 01:11:27.397978] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
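[Note: this target (pid 4173543) is set up with the key back at 0600, so the attach-and-verify run below succeeds (bdev TLSTESTn1), after which the test snapshots the live configuration of both applications via save_config: the two large JSON documents that follow. The script captures them into the tgtconf and bdevperfconf shell variables; redirecting to files, as sketched here, is an illustrative variant and the file names are hypothetical:]

scripts/rpc.py save_config > tgtconf.json                                   # nvmf target app
scripts/rpc.py -s /var/tmp/bdevperf.sock save_config > bdevperfconf.json    # bdevperf app
# the target dump is later replayed into a fresh nvmf_tgt via: nvmfappstart -m 0x2 -c /dev/fd/62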
00:18:11.534 [2024-07-16 01:11:27.398021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:11.534 01:11:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:11.534 01:11:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:11.534 01:11:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:11.534 01:11:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:11.534 01:11:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:11.792 01:11:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:11.792 01:11:27 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.PCWAyVlnnl 00:18:11.792 01:11:27 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.PCWAyVlnnl 00:18:11.792 01:11:27 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:11.792 [2024-07-16 01:11:27.754081] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:11.792 01:11:27 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:12.357 01:11:28 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:12.357 [2024-07-16 01:11:28.299539] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:12.357 [2024-07-16 01:11:28.299766] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:12.357 01:11:28 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:12.615 malloc0 00:18:12.615 01:11:28 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:12.873 01:11:28 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.PCWAyVlnnl 00:18:13.131 [2024-07-16 01:11:29.072380] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:13.131 01:11:29 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=4173708 00:18:13.131 01:11:29 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:13.131 01:11:29 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:13.131 01:11:29 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 4173708 /var/tmp/bdevperf.sock 00:18:13.131 01:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4173708 ']' 00:18:13.131 01:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:13.131 01:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:13.131 01:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:13.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:13.131 01:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:13.131 01:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:13.390 [2024-07-16 01:11:29.136051] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:18:13.390 [2024-07-16 01:11:29.136117] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4173708 ] 00:18:13.390 EAL: No free 2048 kB hugepages reported on node 1 00:18:13.390 [2024-07-16 01:11:29.195396] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.390 [2024-07-16 01:11:29.302462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:13.648 01:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:13.648 01:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:13.648 01:11:29 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.PCWAyVlnnl 00:18:13.648 [2024-07-16 01:11:29.624597] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:13.648 [2024-07-16 01:11:29.624722] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:13.906 TLSTESTn1 00:18:13.906 01:11:29 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:14.166 01:11:30 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:18:14.166 "subsystems": [ 00:18:14.166 { 00:18:14.166 "subsystem": "keyring", 00:18:14.166 "config": [] 00:18:14.166 }, 00:18:14.166 { 00:18:14.166 "subsystem": "iobuf", 00:18:14.166 "config": [ 00:18:14.166 { 00:18:14.166 "method": "iobuf_set_options", 00:18:14.166 "params": { 00:18:14.166 "small_pool_count": 8192, 00:18:14.166 "large_pool_count": 1024, 00:18:14.166 "small_bufsize": 8192, 00:18:14.166 "large_bufsize": 135168 00:18:14.166 } 00:18:14.166 } 00:18:14.166 ] 00:18:14.166 }, 00:18:14.166 { 00:18:14.166 "subsystem": "sock", 00:18:14.166 "config": [ 00:18:14.166 { 00:18:14.166 "method": "sock_set_default_impl", 00:18:14.166 "params": { 00:18:14.166 "impl_name": "posix" 00:18:14.166 } 00:18:14.166 }, 00:18:14.166 { 00:18:14.166 "method": "sock_impl_set_options", 00:18:14.166 "params": { 00:18:14.166 "impl_name": "ssl", 00:18:14.166 "recv_buf_size": 4096, 00:18:14.166 "send_buf_size": 4096, 00:18:14.166 "enable_recv_pipe": true, 00:18:14.166 "enable_quickack": false, 00:18:14.166 "enable_placement_id": 0, 00:18:14.166 "enable_zerocopy_send_server": true, 00:18:14.166 "enable_zerocopy_send_client": false, 00:18:14.166 "zerocopy_threshold": 0, 00:18:14.166 "tls_version": 0, 00:18:14.166 "enable_ktls": false 00:18:14.166 } 00:18:14.166 }, 00:18:14.166 { 00:18:14.166 "method": "sock_impl_set_options", 00:18:14.166 "params": { 00:18:14.166 "impl_name": "posix", 00:18:14.166 "recv_buf_size": 2097152, 00:18:14.166 
"send_buf_size": 2097152, 00:18:14.166 "enable_recv_pipe": true, 00:18:14.166 "enable_quickack": false, 00:18:14.166 "enable_placement_id": 0, 00:18:14.166 "enable_zerocopy_send_server": true, 00:18:14.166 "enable_zerocopy_send_client": false, 00:18:14.166 "zerocopy_threshold": 0, 00:18:14.166 "tls_version": 0, 00:18:14.166 "enable_ktls": false 00:18:14.166 } 00:18:14.166 } 00:18:14.166 ] 00:18:14.166 }, 00:18:14.166 { 00:18:14.166 "subsystem": "vmd", 00:18:14.166 "config": [] 00:18:14.166 }, 00:18:14.166 { 00:18:14.166 "subsystem": "accel", 00:18:14.166 "config": [ 00:18:14.166 { 00:18:14.166 "method": "accel_set_options", 00:18:14.166 "params": { 00:18:14.166 "small_cache_size": 128, 00:18:14.166 "large_cache_size": 16, 00:18:14.166 "task_count": 2048, 00:18:14.166 "sequence_count": 2048, 00:18:14.166 "buf_count": 2048 00:18:14.166 } 00:18:14.166 } 00:18:14.166 ] 00:18:14.166 }, 00:18:14.166 { 00:18:14.167 "subsystem": "bdev", 00:18:14.167 "config": [ 00:18:14.167 { 00:18:14.167 "method": "bdev_set_options", 00:18:14.167 "params": { 00:18:14.167 "bdev_io_pool_size": 65535, 00:18:14.167 "bdev_io_cache_size": 256, 00:18:14.167 "bdev_auto_examine": true, 00:18:14.167 "iobuf_small_cache_size": 128, 00:18:14.167 "iobuf_large_cache_size": 16 00:18:14.167 } 00:18:14.167 }, 00:18:14.167 { 00:18:14.167 "method": "bdev_raid_set_options", 00:18:14.167 "params": { 00:18:14.167 "process_window_size_kb": 1024 00:18:14.167 } 00:18:14.167 }, 00:18:14.167 { 00:18:14.167 "method": "bdev_iscsi_set_options", 00:18:14.167 "params": { 00:18:14.167 "timeout_sec": 30 00:18:14.167 } 00:18:14.167 }, 00:18:14.167 { 00:18:14.167 "method": "bdev_nvme_set_options", 00:18:14.167 "params": { 00:18:14.167 "action_on_timeout": "none", 00:18:14.167 "timeout_us": 0, 00:18:14.167 "timeout_admin_us": 0, 00:18:14.167 "keep_alive_timeout_ms": 10000, 00:18:14.167 "arbitration_burst": 0, 00:18:14.167 "low_priority_weight": 0, 00:18:14.167 "medium_priority_weight": 0, 00:18:14.167 "high_priority_weight": 0, 00:18:14.167 "nvme_adminq_poll_period_us": 10000, 00:18:14.167 "nvme_ioq_poll_period_us": 0, 00:18:14.167 "io_queue_requests": 0, 00:18:14.167 "delay_cmd_submit": true, 00:18:14.167 "transport_retry_count": 4, 00:18:14.167 "bdev_retry_count": 3, 00:18:14.167 "transport_ack_timeout": 0, 00:18:14.167 "ctrlr_loss_timeout_sec": 0, 00:18:14.167 "reconnect_delay_sec": 0, 00:18:14.167 "fast_io_fail_timeout_sec": 0, 00:18:14.167 "disable_auto_failback": false, 00:18:14.167 "generate_uuids": false, 00:18:14.167 "transport_tos": 0, 00:18:14.167 "nvme_error_stat": false, 00:18:14.167 "rdma_srq_size": 0, 00:18:14.167 "io_path_stat": false, 00:18:14.167 "allow_accel_sequence": false, 00:18:14.167 "rdma_max_cq_size": 0, 00:18:14.167 "rdma_cm_event_timeout_ms": 0, 00:18:14.167 "dhchap_digests": [ 00:18:14.167 "sha256", 00:18:14.167 "sha384", 00:18:14.167 "sha512" 00:18:14.167 ], 00:18:14.167 "dhchap_dhgroups": [ 00:18:14.167 "null", 00:18:14.167 "ffdhe2048", 00:18:14.167 "ffdhe3072", 00:18:14.167 "ffdhe4096", 00:18:14.167 "ffdhe6144", 00:18:14.167 "ffdhe8192" 00:18:14.167 ] 00:18:14.167 } 00:18:14.167 }, 00:18:14.167 { 00:18:14.167 "method": "bdev_nvme_set_hotplug", 00:18:14.167 "params": { 00:18:14.167 "period_us": 100000, 00:18:14.167 "enable": false 00:18:14.167 } 00:18:14.167 }, 00:18:14.167 { 00:18:14.167 "method": "bdev_malloc_create", 00:18:14.167 "params": { 00:18:14.167 "name": "malloc0", 00:18:14.167 "num_blocks": 8192, 00:18:14.167 "block_size": 4096, 00:18:14.167 "physical_block_size": 4096, 00:18:14.167 "uuid": 
"25ba3541-d21a-4b52-8e31-6b759b0c3f15", 00:18:14.167 "optimal_io_boundary": 0 00:18:14.167 } 00:18:14.167 }, 00:18:14.167 { 00:18:14.167 "method": "bdev_wait_for_examine" 00:18:14.167 } 00:18:14.167 ] 00:18:14.167 }, 00:18:14.167 { 00:18:14.167 "subsystem": "nbd", 00:18:14.167 "config": [] 00:18:14.167 }, 00:18:14.167 { 00:18:14.167 "subsystem": "scheduler", 00:18:14.167 "config": [ 00:18:14.167 { 00:18:14.167 "method": "framework_set_scheduler", 00:18:14.167 "params": { 00:18:14.167 "name": "static" 00:18:14.167 } 00:18:14.167 } 00:18:14.167 ] 00:18:14.167 }, 00:18:14.167 { 00:18:14.167 "subsystem": "nvmf", 00:18:14.167 "config": [ 00:18:14.167 { 00:18:14.167 "method": "nvmf_set_config", 00:18:14.167 "params": { 00:18:14.167 "discovery_filter": "match_any", 00:18:14.167 "admin_cmd_passthru": { 00:18:14.167 "identify_ctrlr": false 00:18:14.167 } 00:18:14.167 } 00:18:14.167 }, 00:18:14.167 { 00:18:14.167 "method": "nvmf_set_max_subsystems", 00:18:14.167 "params": { 00:18:14.167 "max_subsystems": 1024 00:18:14.167 } 00:18:14.167 }, 00:18:14.167 { 00:18:14.167 "method": "nvmf_set_crdt", 00:18:14.167 "params": { 00:18:14.167 "crdt1": 0, 00:18:14.167 "crdt2": 0, 00:18:14.167 "crdt3": 0 00:18:14.167 } 00:18:14.167 }, 00:18:14.167 { 00:18:14.167 "method": "nvmf_create_transport", 00:18:14.167 "params": { 00:18:14.167 "trtype": "TCP", 00:18:14.167 "max_queue_depth": 128, 00:18:14.167 "max_io_qpairs_per_ctrlr": 127, 00:18:14.167 "in_capsule_data_size": 4096, 00:18:14.167 "max_io_size": 131072, 00:18:14.167 "io_unit_size": 131072, 00:18:14.167 "max_aq_depth": 128, 00:18:14.167 "num_shared_buffers": 511, 00:18:14.167 "buf_cache_size": 4294967295, 00:18:14.167 "dif_insert_or_strip": false, 00:18:14.167 "zcopy": false, 00:18:14.167 "c2h_success": false, 00:18:14.167 "sock_priority": 0, 00:18:14.167 "abort_timeout_sec": 1, 00:18:14.167 "ack_timeout": 0, 00:18:14.167 "data_wr_pool_size": 0 00:18:14.167 } 00:18:14.167 }, 00:18:14.167 { 00:18:14.167 "method": "nvmf_create_subsystem", 00:18:14.167 "params": { 00:18:14.167 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:14.167 "allow_any_host": false, 00:18:14.167 "serial_number": "SPDK00000000000001", 00:18:14.167 "model_number": "SPDK bdev Controller", 00:18:14.167 "max_namespaces": 10, 00:18:14.167 "min_cntlid": 1, 00:18:14.167 "max_cntlid": 65519, 00:18:14.167 "ana_reporting": false 00:18:14.167 } 00:18:14.167 }, 00:18:14.167 { 00:18:14.167 "method": "nvmf_subsystem_add_host", 00:18:14.167 "params": { 00:18:14.167 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:14.167 "host": "nqn.2016-06.io.spdk:host1", 00:18:14.167 "psk": "/tmp/tmp.PCWAyVlnnl" 00:18:14.167 } 00:18:14.167 }, 00:18:14.167 { 00:18:14.167 "method": "nvmf_subsystem_add_ns", 00:18:14.167 "params": { 00:18:14.167 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:14.167 "namespace": { 00:18:14.167 "nsid": 1, 00:18:14.167 "bdev_name": "malloc0", 00:18:14.167 "nguid": "25BA3541D21A4B528E316B759B0C3F15", 00:18:14.167 "uuid": "25ba3541-d21a-4b52-8e31-6b759b0c3f15", 00:18:14.167 "no_auto_visible": false 00:18:14.167 } 00:18:14.167 } 00:18:14.167 }, 00:18:14.167 { 00:18:14.167 "method": "nvmf_subsystem_add_listener", 00:18:14.167 "params": { 00:18:14.167 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:14.167 "listen_address": { 00:18:14.167 "trtype": "TCP", 00:18:14.167 "adrfam": "IPv4", 00:18:14.167 "traddr": "10.0.0.2", 00:18:14.167 "trsvcid": "4420" 00:18:14.167 }, 00:18:14.167 "secure_channel": true 00:18:14.167 } 00:18:14.167 } 00:18:14.167 ] 00:18:14.167 } 00:18:14.167 ] 00:18:14.167 }' 00:18:14.167 01:11:30 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:14.426 01:11:30 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:18:14.426 "subsystems": [ 00:18:14.426 { 00:18:14.426 "subsystem": "keyring", 00:18:14.426 "config": [] 00:18:14.426 }, 00:18:14.426 { 00:18:14.426 "subsystem": "iobuf", 00:18:14.426 "config": [ 00:18:14.426 { 00:18:14.426 "method": "iobuf_set_options", 00:18:14.426 "params": { 00:18:14.426 "small_pool_count": 8192, 00:18:14.426 "large_pool_count": 1024, 00:18:14.426 "small_bufsize": 8192, 00:18:14.426 "large_bufsize": 135168 00:18:14.426 } 00:18:14.426 } 00:18:14.426 ] 00:18:14.426 }, 00:18:14.426 { 00:18:14.426 "subsystem": "sock", 00:18:14.426 "config": [ 00:18:14.426 { 00:18:14.426 "method": "sock_set_default_impl", 00:18:14.426 "params": { 00:18:14.426 "impl_name": "posix" 00:18:14.426 } 00:18:14.426 }, 00:18:14.426 { 00:18:14.426 "method": "sock_impl_set_options", 00:18:14.426 "params": { 00:18:14.426 "impl_name": "ssl", 00:18:14.426 "recv_buf_size": 4096, 00:18:14.426 "send_buf_size": 4096, 00:18:14.426 "enable_recv_pipe": true, 00:18:14.426 "enable_quickack": false, 00:18:14.426 "enable_placement_id": 0, 00:18:14.426 "enable_zerocopy_send_server": true, 00:18:14.426 "enable_zerocopy_send_client": false, 00:18:14.426 "zerocopy_threshold": 0, 00:18:14.426 "tls_version": 0, 00:18:14.426 "enable_ktls": false 00:18:14.426 } 00:18:14.426 }, 00:18:14.426 { 00:18:14.426 "method": "sock_impl_set_options", 00:18:14.426 "params": { 00:18:14.426 "impl_name": "posix", 00:18:14.426 "recv_buf_size": 2097152, 00:18:14.426 "send_buf_size": 2097152, 00:18:14.426 "enable_recv_pipe": true, 00:18:14.426 "enable_quickack": false, 00:18:14.426 "enable_placement_id": 0, 00:18:14.426 "enable_zerocopy_send_server": true, 00:18:14.426 "enable_zerocopy_send_client": false, 00:18:14.427 "zerocopy_threshold": 0, 00:18:14.427 "tls_version": 0, 00:18:14.427 "enable_ktls": false 00:18:14.427 } 00:18:14.427 } 00:18:14.427 ] 00:18:14.427 }, 00:18:14.427 { 00:18:14.427 "subsystem": "vmd", 00:18:14.427 "config": [] 00:18:14.427 }, 00:18:14.427 { 00:18:14.427 "subsystem": "accel", 00:18:14.427 "config": [ 00:18:14.427 { 00:18:14.427 "method": "accel_set_options", 00:18:14.427 "params": { 00:18:14.427 "small_cache_size": 128, 00:18:14.427 "large_cache_size": 16, 00:18:14.427 "task_count": 2048, 00:18:14.427 "sequence_count": 2048, 00:18:14.427 "buf_count": 2048 00:18:14.427 } 00:18:14.427 } 00:18:14.427 ] 00:18:14.427 }, 00:18:14.427 { 00:18:14.427 "subsystem": "bdev", 00:18:14.427 "config": [ 00:18:14.427 { 00:18:14.427 "method": "bdev_set_options", 00:18:14.427 "params": { 00:18:14.427 "bdev_io_pool_size": 65535, 00:18:14.427 "bdev_io_cache_size": 256, 00:18:14.427 "bdev_auto_examine": true, 00:18:14.427 "iobuf_small_cache_size": 128, 00:18:14.427 "iobuf_large_cache_size": 16 00:18:14.427 } 00:18:14.427 }, 00:18:14.427 { 00:18:14.427 "method": "bdev_raid_set_options", 00:18:14.427 "params": { 00:18:14.427 "process_window_size_kb": 1024 00:18:14.427 } 00:18:14.427 }, 00:18:14.427 { 00:18:14.427 "method": "bdev_iscsi_set_options", 00:18:14.427 "params": { 00:18:14.427 "timeout_sec": 30 00:18:14.427 } 00:18:14.427 }, 00:18:14.427 { 00:18:14.427 "method": "bdev_nvme_set_options", 00:18:14.427 "params": { 00:18:14.427 "action_on_timeout": "none", 00:18:14.427 "timeout_us": 0, 00:18:14.427 "timeout_admin_us": 0, 00:18:14.427 "keep_alive_timeout_ms": 10000, 00:18:14.427 "arbitration_burst": 0, 
00:18:14.427 "low_priority_weight": 0, 00:18:14.427 "medium_priority_weight": 0, 00:18:14.427 "high_priority_weight": 0, 00:18:14.427 "nvme_adminq_poll_period_us": 10000, 00:18:14.427 "nvme_ioq_poll_period_us": 0, 00:18:14.427 "io_queue_requests": 512, 00:18:14.427 "delay_cmd_submit": true, 00:18:14.427 "transport_retry_count": 4, 00:18:14.427 "bdev_retry_count": 3, 00:18:14.427 "transport_ack_timeout": 0, 00:18:14.427 "ctrlr_loss_timeout_sec": 0, 00:18:14.427 "reconnect_delay_sec": 0, 00:18:14.427 "fast_io_fail_timeout_sec": 0, 00:18:14.427 "disable_auto_failback": false, 00:18:14.427 "generate_uuids": false, 00:18:14.427 "transport_tos": 0, 00:18:14.427 "nvme_error_stat": false, 00:18:14.427 "rdma_srq_size": 0, 00:18:14.427 "io_path_stat": false, 00:18:14.427 "allow_accel_sequence": false, 00:18:14.427 "rdma_max_cq_size": 0, 00:18:14.427 "rdma_cm_event_timeout_ms": 0, 00:18:14.427 "dhchap_digests": [ 00:18:14.427 "sha256", 00:18:14.427 "sha384", 00:18:14.427 "sha512" 00:18:14.427 ], 00:18:14.427 "dhchap_dhgroups": [ 00:18:14.427 "null", 00:18:14.427 "ffdhe2048", 00:18:14.427 "ffdhe3072", 00:18:14.427 "ffdhe4096", 00:18:14.427 "ffdhe6144", 00:18:14.427 "ffdhe8192" 00:18:14.427 ] 00:18:14.427 } 00:18:14.427 }, 00:18:14.427 { 00:18:14.427 "method": "bdev_nvme_attach_controller", 00:18:14.427 "params": { 00:18:14.427 "name": "TLSTEST", 00:18:14.427 "trtype": "TCP", 00:18:14.427 "adrfam": "IPv4", 00:18:14.427 "traddr": "10.0.0.2", 00:18:14.427 "trsvcid": "4420", 00:18:14.427 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:14.427 "prchk_reftag": false, 00:18:14.427 "prchk_guard": false, 00:18:14.427 "ctrlr_loss_timeout_sec": 0, 00:18:14.427 "reconnect_delay_sec": 0, 00:18:14.427 "fast_io_fail_timeout_sec": 0, 00:18:14.427 "psk": "/tmp/tmp.PCWAyVlnnl", 00:18:14.427 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:14.427 "hdgst": false, 00:18:14.427 "ddgst": false 00:18:14.427 } 00:18:14.427 }, 00:18:14.427 { 00:18:14.427 "method": "bdev_nvme_set_hotplug", 00:18:14.427 "params": { 00:18:14.427 "period_us": 100000, 00:18:14.427 "enable": false 00:18:14.427 } 00:18:14.427 }, 00:18:14.427 { 00:18:14.427 "method": "bdev_wait_for_examine" 00:18:14.427 } 00:18:14.427 ] 00:18:14.427 }, 00:18:14.427 { 00:18:14.427 "subsystem": "nbd", 00:18:14.427 "config": [] 00:18:14.427 } 00:18:14.427 ] 00:18:14.427 }' 00:18:14.427 01:11:30 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 4173708 00:18:14.427 01:11:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4173708 ']' 00:18:14.427 01:11:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4173708 00:18:14.427 01:11:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:14.427 01:11:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:14.427 01:11:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4173708 00:18:14.685 01:11:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:14.685 01:11:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:14.685 01:11:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4173708' 00:18:14.685 killing process with pid 4173708 00:18:14.685 01:11:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4173708 00:18:14.685 Received shutdown signal, test time was about 10.000000 seconds 00:18:14.685 00:18:14.685 Latency(us) 00:18:14.685 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:18:14.685 =================================================================================================================== 00:18:14.685 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:14.685 [2024-07-16 01:11:30.421390] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:14.685 01:11:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4173708 00:18:14.685 01:11:30 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 4173543 00:18:14.685 01:11:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4173543 ']' 00:18:14.685 01:11:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4173543 00:18:14.943 01:11:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:14.943 01:11:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:14.943 01:11:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4173543 00:18:14.943 01:11:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:14.943 01:11:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:14.943 01:11:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4173543' 00:18:14.943 killing process with pid 4173543 00:18:14.943 01:11:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4173543 00:18:14.943 [2024-07-16 01:11:30.710721] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:14.943 01:11:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4173543 00:18:15.201 01:11:30 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:15.201 01:11:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:15.201 01:11:30 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:18:15.201 "subsystems": [ 00:18:15.201 { 00:18:15.201 "subsystem": "keyring", 00:18:15.201 "config": [] 00:18:15.201 }, 00:18:15.201 { 00:18:15.201 "subsystem": "iobuf", 00:18:15.201 "config": [ 00:18:15.201 { 00:18:15.201 "method": "iobuf_set_options", 00:18:15.201 "params": { 00:18:15.201 "small_pool_count": 8192, 00:18:15.201 "large_pool_count": 1024, 00:18:15.201 "small_bufsize": 8192, 00:18:15.201 "large_bufsize": 135168 00:18:15.201 } 00:18:15.201 } 00:18:15.201 ] 00:18:15.201 }, 00:18:15.201 { 00:18:15.201 "subsystem": "sock", 00:18:15.201 "config": [ 00:18:15.201 { 00:18:15.201 "method": "sock_set_default_impl", 00:18:15.201 "params": { 00:18:15.201 "impl_name": "posix" 00:18:15.201 } 00:18:15.201 }, 00:18:15.201 { 00:18:15.201 "method": "sock_impl_set_options", 00:18:15.201 "params": { 00:18:15.201 "impl_name": "ssl", 00:18:15.201 "recv_buf_size": 4096, 00:18:15.201 "send_buf_size": 4096, 00:18:15.201 "enable_recv_pipe": true, 00:18:15.201 "enable_quickack": false, 00:18:15.201 "enable_placement_id": 0, 00:18:15.201 "enable_zerocopy_send_server": true, 00:18:15.201 "enable_zerocopy_send_client": false, 00:18:15.201 "zerocopy_threshold": 0, 00:18:15.201 "tls_version": 0, 00:18:15.201 "enable_ktls": false 00:18:15.201 } 00:18:15.201 }, 00:18:15.201 { 00:18:15.201 "method": "sock_impl_set_options", 00:18:15.201 "params": { 00:18:15.201 "impl_name": "posix", 00:18:15.201 "recv_buf_size": 2097152, 00:18:15.201 "send_buf_size": 2097152, 00:18:15.201 "enable_recv_pipe": true, 
00:18:15.201 "enable_quickack": false, 00:18:15.201 "enable_placement_id": 0, 00:18:15.201 "enable_zerocopy_send_server": true, 00:18:15.201 "enable_zerocopy_send_client": false, 00:18:15.201 "zerocopy_threshold": 0, 00:18:15.201 "tls_version": 0, 00:18:15.201 "enable_ktls": false 00:18:15.201 } 00:18:15.201 } 00:18:15.201 ] 00:18:15.201 }, 00:18:15.201 { 00:18:15.201 "subsystem": "vmd", 00:18:15.201 "config": [] 00:18:15.201 }, 00:18:15.201 { 00:18:15.201 "subsystem": "accel", 00:18:15.201 "config": [ 00:18:15.201 { 00:18:15.201 "method": "accel_set_options", 00:18:15.201 "params": { 00:18:15.201 "small_cache_size": 128, 00:18:15.201 "large_cache_size": 16, 00:18:15.202 "task_count": 2048, 00:18:15.202 "sequence_count": 2048, 00:18:15.202 "buf_count": 2048 00:18:15.202 } 00:18:15.202 } 00:18:15.202 ] 00:18:15.202 }, 00:18:15.202 { 00:18:15.202 "subsystem": "bdev", 00:18:15.202 "config": [ 00:18:15.202 { 00:18:15.202 "method": "bdev_set_options", 00:18:15.202 "params": { 00:18:15.202 "bdev_io_pool_size": 65535, 00:18:15.202 "bdev_io_cache_size": 256, 00:18:15.202 "bdev_auto_examine": true, 00:18:15.202 "iobuf_small_cache_size": 128, 00:18:15.202 "iobuf_large_cache_size": 16 00:18:15.202 } 00:18:15.202 }, 00:18:15.202 { 00:18:15.202 "method": "bdev_raid_set_options", 00:18:15.202 "params": { 00:18:15.202 "process_window_size_kb": 1024 00:18:15.202 } 00:18:15.202 }, 00:18:15.202 { 00:18:15.202 "method": "bdev_iscsi_set_options", 00:18:15.202 "params": { 00:18:15.202 "timeout_sec": 30 00:18:15.202 } 00:18:15.202 }, 00:18:15.202 { 00:18:15.202 "method": "bdev_nvme_set_options", 00:18:15.202 "params": { 00:18:15.202 "action_on_timeout": "none", 00:18:15.202 "timeout_us": 0, 00:18:15.202 "timeout_admin_us": 0, 00:18:15.202 "keep_alive_timeout_ms": 10000, 00:18:15.202 "arbitration_burst": 0, 00:18:15.202 "low_priority_weight": 0, 00:18:15.202 "medium_priority_weight": 0, 00:18:15.202 "high_priority_weight": 0, 00:18:15.202 "nvme_adminq_poll_period_us": 10000, 00:18:15.202 "nvme_ioq_poll_period_us": 0, 00:18:15.202 "io_queue_requests": 0, 00:18:15.202 "delay_cmd_submit": true, 00:18:15.202 "transport_retry_count": 4, 00:18:15.202 "bdev_retry_count": 3, 00:18:15.202 "transport_ack_timeout": 0, 00:18:15.202 "ctrlr_loss_timeout_sec": 0, 00:18:15.202 "reconnect_delay_sec": 0, 00:18:15.202 "fast_io_fail_timeout_sec": 0, 00:18:15.202 "disable_auto_failback": false, 00:18:15.202 "generate_uuids": false, 00:18:15.202 "transport_tos": 0, 00:18:15.202 "nvme_error_stat": false, 00:18:15.202 "rdma_srq_size": 0, 00:18:15.202 "io_path_stat": false, 00:18:15.202 "allow_accel_sequence": false, 00:18:15.202 "rdma_max_cq_size": 0, 00:18:15.202 "rdma_cm_event_timeout_ms": 0, 00:18:15.202 "dhchap_digests": [ 00:18:15.202 "sha256", 00:18:15.202 "sha384", 00:18:15.202 "sha512" 00:18:15.202 ], 00:18:15.202 "dhchap_dhgroups": [ 00:18:15.202 "null", 00:18:15.202 "ffdhe2048", 00:18:15.202 "ffdhe3072", 00:18:15.202 "ffdhe4096", 00:18:15.202 "ffdhe 01:11:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:15.202 6144", 00:18:15.202 "ffdhe8192" 00:18:15.202 ] 00:18:15.202 } 00:18:15.202 }, 00:18:15.202 { 00:18:15.202 "method": "bdev_nvme_set_hotplug", 00:18:15.202 "params": { 00:18:15.202 "period_us": 100000, 00:18:15.202 "enable": false 00:18:15.202 } 00:18:15.202 }, 00:18:15.202 { 00:18:15.202 "method": "bdev_malloc_create", 00:18:15.202 "params": { 00:18:15.202 "name": "malloc0", 00:18:15.202 "num_blocks": 8192, 00:18:15.202 "block_size": 4096, 00:18:15.202 "physical_block_size": 4096, 
00:18:15.202 "uuid": "25ba3541-d21a-4b52-8e31-6b759b0c3f15", 00:18:15.202 "optimal_io_boundary": 0 00:18:15.202 } 00:18:15.202 }, 00:18:15.202 { 00:18:15.202 "method": "bdev_wait_for_examine" 00:18:15.202 } 00:18:15.202 ] 00:18:15.202 }, 00:18:15.202 { 00:18:15.202 "subsystem": "nbd", 00:18:15.202 "config": [] 00:18:15.202 }, 00:18:15.202 { 00:18:15.202 "subsystem": "scheduler", 00:18:15.202 "config": [ 00:18:15.202 { 00:18:15.202 "method": "framework_set_scheduler", 00:18:15.202 "params": { 00:18:15.202 "name": "static" 00:18:15.202 } 00:18:15.202 } 00:18:15.202 ] 00:18:15.202 }, 00:18:15.202 { 00:18:15.202 "subsystem": "nvmf", 00:18:15.202 "config": [ 00:18:15.202 { 00:18:15.202 "method": "nvmf_set_config", 00:18:15.202 "params": { 00:18:15.202 "discovery_filter": "match_any", 00:18:15.202 "admin_cmd_passthru": { 00:18:15.202 "identify_ctrlr": false 00:18:15.202 } 00:18:15.202 } 00:18:15.202 }, 00:18:15.202 { 00:18:15.202 "method": "nvmf_set_max_subsystems", 00:18:15.202 "params": { 00:18:15.202 "max_subsystems": 1024 00:18:15.202 } 00:18:15.202 }, 00:18:15.202 { 00:18:15.202 "method": "nvmf_set_crdt", 00:18:15.202 "params": { 00:18:15.202 "crdt1": 0, 00:18:15.202 "crdt2": 0, 00:18:15.202 "crdt3": 0 00:18:15.202 } 00:18:15.202 }, 00:18:15.202 { 00:18:15.202 "method": "nvmf_create_transport", 00:18:15.202 "params": { 00:18:15.202 "trtype": "TCP", 00:18:15.202 "max_queue_depth": 128, 00:18:15.202 "max_io_qpairs_per_ctrlr": 127, 00:18:15.202 "in_capsule_data_size": 4096, 00:18:15.202 "max_io_size": 131072, 00:18:15.202 "io_unit_size": 131072, 00:18:15.202 "max_aq_depth": 128, 00:18:15.202 "num_shared_buffers": 511, 00:18:15.202 "buf_cache_size": 4294967295, 00:18:15.202 "dif_insert_or_strip": false, 00:18:15.202 "zcopy": false, 00:18:15.202 "c2h_success": false, 00:18:15.202 "sock_priority": 0, 00:18:15.202 "abort_timeout_sec": 1, 00:18:15.202 "ack_timeout": 0, 00:18:15.202 "data_wr_pool_size": 0 00:18:15.202 } 00:18:15.202 }, 00:18:15.202 { 00:18:15.202 "method": "nvmf_create_subsystem", 00:18:15.202 "params": { 00:18:15.202 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:15.202 "allow_any_host": false, 00:18:15.202 "serial_number": "SPDK00000000000001", 00:18:15.202 "model_number": "SPDK bdev Controller", 00:18:15.202 "max_namespaces": 10, 00:18:15.202 "min_cntlid": 1, 00:18:15.202 "max_cntlid": 65519, 00:18:15.202 "ana_reporting": false 00:18:15.202 } 00:18:15.202 }, 00:18:15.202 { 00:18:15.202 "method": "nvmf_subsystem_add_host", 00:18:15.202 "params": { 00:18:15.202 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:15.202 "host": "nqn.2016-06.io.spdk:host1", 00:18:15.202 "psk": "/tmp/tmp.PCWAyVlnnl" 00:18:15.202 } 00:18:15.202 }, 00:18:15.202 { 00:18:15.202 "method": "nvmf_subsystem_add_ns", 00:18:15.202 "params": { 00:18:15.202 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:15.202 "namespace": { 00:18:15.202 "nsid": 1, 00:18:15.202 "bdev_name": "malloc0", 00:18:15.202 "nguid": "25BA3541D21A4B528E316B759B0C3F15", 00:18:15.202 "uuid": "25ba3541-d21a-4b52-8e31-6b759b0c3f15", 00:18:15.202 "no_auto_visible": false 00:18:15.202 } 00:18:15.202 } 00:18:15.202 }, 00:18:15.202 { 00:18:15.202 "method": "nvmf_subsystem_add_listener", 00:18:15.202 "params": { 00:18:15.202 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:15.202 "listen_address": { 00:18:15.202 "trtype": "TCP", 00:18:15.202 "adrfam": "IPv4", 00:18:15.202 "traddr": "10.0.0.2", 00:18:15.202 "trsvcid": "4420" 00:18:15.202 }, 00:18:15.202 "secure_channel": true 00:18:15.202 } 00:18:15.202 } 00:18:15.202 ] 00:18:15.202 } 00:18:15.202 ] 00:18:15.202 }' 
00:18:15.202 01:11:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:15.202 01:11:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4173984 00:18:15.202 01:11:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:15.202 01:11:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4173984 00:18:15.202 01:11:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4173984 ']' 00:18:15.202 01:11:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:15.202 01:11:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:15.202 01:11:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:15.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:15.202 01:11:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:15.202 01:11:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:15.202 [2024-07-16 01:11:31.039834] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:18:15.202 [2024-07-16 01:11:31.039917] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:15.202 EAL: No free 2048 kB hugepages reported on node 1 00:18:15.202 [2024-07-16 01:11:31.102086] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.460 [2024-07-16 01:11:31.199512] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:15.460 [2024-07-16 01:11:31.199561] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:15.460 [2024-07-16 01:11:31.199589] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:15.460 [2024-07-16 01:11:31.199600] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:15.460 [2024-07-16 01:11:31.199609] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
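The waitforlisten call above amounts to polling the app's RPC socket until it answers. A rough sketch, assuming the default /var/tmp/spdk.sock address and the max_retries=100 visible in the trace; the 0.1 s interval is an assumption, not the exact autotest_common.sh implementation:

    # Poll until nvmf_tgt responds on its RPC socket (rpc_get_methods is a
    # standard SPDK RPC; -t 1 caps each attempt at one second)
    for ((i = 0; i < 100; i++)); do
        scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done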
00:18:15.460 [2024-07-16 01:11:31.199687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:15.460 [2024-07-16 01:11:31.427104] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:15.460 [2024-07-16 01:11:31.443059] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:15.717 [2024-07-16 01:11:31.459108] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:15.717 [2024-07-16 01:11:31.480158] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:15.974 01:11:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:15.974 01:11:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:15.974 01:11:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:15.974 01:11:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:15.974 01:11:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:16.232 01:11:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:16.232 01:11:31 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=4174135 00:18:16.232 01:11:31 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 4174135 /var/tmp/bdevperf.sock 00:18:16.232 01:11:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4174135 ']' 00:18:16.232 01:11:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:16.232 01:11:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:16.233 01:11:31 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:16.233 01:11:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:16.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
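bdevperf is launched in the same two-phase way: -z holds it idle after init so the bdev layer can be configured first, and the verify workload only starts once bdevperf.py sends perform_tests (target/tls.sh@211 below). A sketch of the launch, flags taken verbatim from the tls.sh@204 command above; the bdevperfconf variable stands in for the JSON fed over the process-substitution fd (/dev/fd/63):

    # -z: stay idle until perform_tests; -t 10: run the verify job for 10 s
    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf") &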
00:18:16.233 01:11:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:16.233 01:11:31 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:18:16.233 "subsystems": [ 00:18:16.233 { 00:18:16.233 "subsystem": "keyring", 00:18:16.233 "config": [] 00:18:16.233 }, 00:18:16.233 { 00:18:16.233 "subsystem": "iobuf", 00:18:16.233 "config": [ 00:18:16.233 { 00:18:16.233 "method": "iobuf_set_options", 00:18:16.233 "params": { 00:18:16.233 "small_pool_count": 8192, 00:18:16.233 "large_pool_count": 1024, 00:18:16.233 "small_bufsize": 8192, 00:18:16.233 "large_bufsize": 135168 00:18:16.233 } 00:18:16.233 } 00:18:16.233 ] 00:18:16.233 }, 00:18:16.233 { 00:18:16.233 "subsystem": "sock", 00:18:16.233 "config": [ 00:18:16.233 { 00:18:16.233 "method": "sock_set_default_impl", 00:18:16.233 "params": { 00:18:16.233 "impl_name": "posix" 00:18:16.233 } 00:18:16.233 }, 00:18:16.233 { 00:18:16.233 "method": "sock_impl_set_options", 00:18:16.233 "params": { 00:18:16.233 "impl_name": "ssl", 00:18:16.233 "recv_buf_size": 4096, 00:18:16.233 "send_buf_size": 4096, 00:18:16.233 "enable_recv_pipe": true, 00:18:16.233 "enable_quickack": false, 00:18:16.233 "enable_placement_id": 0, 00:18:16.233 "enable_zerocopy_send_server": true, 00:18:16.233 "enable_zerocopy_send_client": false, 00:18:16.233 "zerocopy_threshold": 0, 00:18:16.233 "tls_version": 0, 00:18:16.233 "enable_ktls": false 00:18:16.233 } 00:18:16.233 }, 00:18:16.233 { 00:18:16.233 "method": "sock_impl_set_options", 00:18:16.233 "params": { 00:18:16.233 "impl_name": "posix", 00:18:16.233 "recv_buf_size": 2097152, 00:18:16.233 "send_buf_size": 2097152, 00:18:16.233 "enable_recv_pipe": true, 00:18:16.233 "enable_quickack": false, 00:18:16.233 "enable_placement_id": 0, 00:18:16.233 "enable_zerocopy_send_server": true, 00:18:16.233 "enable_zerocopy_send_client": false, 00:18:16.233 "zerocopy_threshold": 0, 00:18:16.233 "tls_version": 0, 00:18:16.233 "enable_ktls": false 00:18:16.233 } 00:18:16.233 } 00:18:16.233 ] 00:18:16.233 }, 00:18:16.233 { 00:18:16.233 "subsystem": "vmd", 00:18:16.233 "config": [] 00:18:16.233 }, 00:18:16.233 { 00:18:16.233 "subsystem": "accel", 00:18:16.233 "config": [ 00:18:16.233 { 00:18:16.233 "method": "accel_set_options", 00:18:16.233 "params": { 00:18:16.233 "small_cache_size": 128, 00:18:16.233 "large_cache_size": 16, 00:18:16.233 "task_count": 2048, 00:18:16.233 "sequence_count": 2048, 00:18:16.233 "buf_count": 2048 00:18:16.233 } 00:18:16.233 } 00:18:16.233 ] 00:18:16.233 }, 00:18:16.233 { 00:18:16.233 "subsystem": "bdev", 00:18:16.233 "config": [ 00:18:16.233 { 00:18:16.233 "method": "bdev_set_options", 00:18:16.233 "params": { 00:18:16.233 "bdev_io_pool_size": 65535, 00:18:16.233 "bdev_io_cache_size": 256, 00:18:16.233 "bdev_auto_examine": true, 00:18:16.233 "iobuf_small_cache_size": 128, 00:18:16.233 "iobuf_large_cache_size": 16 00:18:16.233 } 00:18:16.233 }, 00:18:16.233 { 00:18:16.233 "method": "bdev_raid_set_options", 00:18:16.233 "params": { 00:18:16.233 "process_window_size_kb": 1024 00:18:16.233 } 00:18:16.233 }, 00:18:16.233 { 00:18:16.233 "method": "bdev_iscsi_set_options", 00:18:16.233 "params": { 00:18:16.233 "timeout_sec": 30 00:18:16.233 } 00:18:16.233 }, 00:18:16.233 { 00:18:16.233 "method": "bdev_nvme_set_options", 00:18:16.233 "params": { 00:18:16.233 "action_on_timeout": "none", 00:18:16.233 "timeout_us": 0, 00:18:16.233 "timeout_admin_us": 0, 00:18:16.233 "keep_alive_timeout_ms": 10000, 00:18:16.233 "arbitration_burst": 0, 00:18:16.233 "low_priority_weight": 0, 00:18:16.233 
"medium_priority_weight": 0, 00:18:16.233 "high_priority_weight": 0, 00:18:16.233 "nvme_adminq_poll_period_us": 10000, 00:18:16.233 "nvme_ioq_poll_period_us": 0, 00:18:16.233 "io_queue_requests": 512, 00:18:16.233 "delay_cmd_submit": true, 00:18:16.233 "transport_retry_count": 4, 00:18:16.233 "bdev_retry_count": 3, 00:18:16.233 "transport_ack_timeout": 0, 00:18:16.233 "ctrlr_loss_timeout_sec": 0, 00:18:16.233 "reconnect_delay_sec": 0, 00:18:16.233 "fast_io_fail_timeout_sec": 0, 00:18:16.233 "disable_auto_failback": false, 00:18:16.233 "generate_uuids": false, 00:18:16.233 "transport_tos": 0, 00:18:16.233 "nvme_error_stat": false, 00:18:16.233 "rdma_srq_size": 0, 00:18:16.233 "io_path_stat": false, 00:18:16.233 "allow_accel_sequence": false, 00:18:16.233 "rdma_max_cq_size": 0, 00:18:16.233 "rdma_cm_event_timeout_ms": 0, 00:18:16.233 "dhchap_digests": [ 00:18:16.233 "sha256", 00:18:16.233 "sha384", 00:18:16.233 "sha512" 00:18:16.233 ], 00:18:16.233 "dhchap_dhgroups": [ 00:18:16.233 "null", 00:18:16.233 "ffdhe2048", 00:18:16.233 "ffdhe3072", 00:18:16.233 "ffdhe4096", 00:18:16.233 "ffd 01:11:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:16.233 he6144", 00:18:16.233 "ffdhe8192" 00:18:16.233 ] 00:18:16.233 } 00:18:16.233 }, 00:18:16.233 { 00:18:16.233 "method": "bdev_nvme_attach_controller", 00:18:16.233 "params": { 00:18:16.233 "name": "TLSTEST", 00:18:16.233 "trtype": "TCP", 00:18:16.233 "adrfam": "IPv4", 00:18:16.233 "traddr": "10.0.0.2", 00:18:16.233 "trsvcid": "4420", 00:18:16.233 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:16.233 "prchk_reftag": false, 00:18:16.233 "prchk_guard": false, 00:18:16.233 "ctrlr_loss_timeout_sec": 0, 00:18:16.233 "reconnect_delay_sec": 0, 00:18:16.233 "fast_io_fail_timeout_sec": 0, 00:18:16.233 "psk": "/tmp/tmp.PCWAyVlnnl", 00:18:16.233 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:16.233 "hdgst": false, 00:18:16.233 "ddgst": false 00:18:16.233 } 00:18:16.233 }, 00:18:16.233 { 00:18:16.233 "method": "bdev_nvme_set_hotplug", 00:18:16.233 "params": { 00:18:16.233 "period_us": 100000, 00:18:16.233 "enable": false 00:18:16.233 } 00:18:16.233 }, 00:18:16.233 { 00:18:16.233 "method": "bdev_wait_for_examine" 00:18:16.233 } 00:18:16.233 ] 00:18:16.233 }, 00:18:16.233 { 00:18:16.233 "subsystem": "nbd", 00:18:16.233 "config": [] 00:18:16.233 } 00:18:16.233 ] 00:18:16.233 }' 00:18:16.234 [2024-07-16 01:11:32.017123] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 
00:18:16.234 [2024-07-16 01:11:32.017214] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4174135 ] 00:18:16.234 EAL: No free 2048 kB hugepages reported on node 1 00:18:16.234 [2024-07-16 01:11:32.075769] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.234 [2024-07-16 01:11:32.183672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:16.491 [2024-07-16 01:11:32.351335] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:16.491 [2024-07-16 01:11:32.351469] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:17.054 01:11:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:17.054 01:11:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:17.054 01:11:32 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:17.311 Running I/O for 10 seconds... 00:18:27.272 00:18:27.272 Latency(us) 00:18:27.272 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:27.272 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:27.272 Verification LBA range: start 0x0 length 0x2000 00:18:27.272 TLSTESTn1 : 10.02 3421.54 13.37 0.00 0.00 37343.11 7573.05 40195.41 00:18:27.272 =================================================================================================================== 00:18:27.272 Total : 3421.54 13.37 0.00 0.00 37343.11 7573.05 40195.41 00:18:27.272 0 00:18:27.272 01:11:43 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:27.272 01:11:43 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 4174135 00:18:27.272 01:11:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4174135 ']' 00:18:27.272 01:11:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4174135 00:18:27.272 01:11:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:27.272 01:11:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:27.272 01:11:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4174135 00:18:27.272 01:11:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:27.272 01:11:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:27.272 01:11:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4174135' 00:18:27.272 killing process with pid 4174135 00:18:27.272 01:11:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4174135 00:18:27.272 Received shutdown signal, test time was about 10.000000 seconds 00:18:27.272 00:18:27.272 Latency(us) 00:18:27.272 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:27.272 =================================================================================================================== 00:18:27.272 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:27.272 [2024-07-16 01:11:43.173969] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 
'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:27.272 01:11:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4174135 00:18:27.530 01:11:43 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 4173984 00:18:27.530 01:11:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4173984 ']' 00:18:27.530 01:11:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4173984 00:18:27.530 01:11:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:27.530 01:11:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:27.530 01:11:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4173984 00:18:27.530 01:11:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:27.530 01:11:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:27.530 01:11:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4173984' 00:18:27.530 killing process with pid 4173984 00:18:27.530 01:11:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4173984 00:18:27.530 [2024-07-16 01:11:43.461673] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:27.530 01:11:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4173984 00:18:27.787 01:11:43 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:18:27.787 01:11:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:27.787 01:11:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:27.787 01:11:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:27.787 01:11:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4175467 00:18:27.787 01:11:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:27.787 01:11:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4175467 00:18:27.787 01:11:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4175467 ']' 00:18:27.787 01:11:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:27.787 01:11:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:27.787 01:11:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:27.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:27.787 01:11:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:27.787 01:11:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:28.045 [2024-07-16 01:11:43.792148] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 
00:18:28.045 [2024-07-16 01:11:43.792239] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:28.045 EAL: No free 2048 kB hugepages reported on node 1 00:18:28.045 [2024-07-16 01:11:43.854862] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.045 [2024-07-16 01:11:43.958949] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:28.045 [2024-07-16 01:11:43.959009] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:28.045 [2024-07-16 01:11:43.959033] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:28.045 [2024-07-16 01:11:43.959045] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:28.045 [2024-07-16 01:11:43.959055] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:28.045 [2024-07-16 01:11:43.959081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:28.302 01:11:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:28.302 01:11:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:28.302 01:11:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:28.302 01:11:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:28.302 01:11:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:28.302 01:11:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:28.302 01:11:44 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.PCWAyVlnnl 00:18:28.302 01:11:44 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.PCWAyVlnnl 00:18:28.302 01:11:44 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:28.559 [2024-07-16 01:11:44.364205] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:28.559 01:11:44 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:28.816 01:11:44 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:29.097 [2024-07-16 01:11:44.913718] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:29.097 [2024-07-16 01:11:44.913998] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:29.097 01:11:44 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:29.361 malloc0 00:18:29.361 01:11:45 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:29.618 01:11:45 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.PCWAyVlnnl 00:18:29.876 [2024-07-16 01:11:45.695182] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:29.876 01:11:45 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=4175754 00:18:29.876 01:11:45 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:29.876 01:11:45 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 4175754 /var/tmp/bdevperf.sock 00:18:29.876 01:11:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4175754 ']' 00:18:29.876 01:11:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:29.876 01:11:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:29.876 01:11:45 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:29.876 01:11:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:29.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:29.876 01:11:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:29.876 01:11:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:29.876 [2024-07-16 01:11:45.758982] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:18:29.877 [2024-07-16 01:11:45.759067] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4175754 ] 00:18:29.877 EAL: No free 2048 kB hugepages reported on node 1 00:18:29.877 [2024-07-16 01:11:45.817384] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.145 [2024-07-16 01:11:45.923084] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:30.145 01:11:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:30.145 01:11:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:30.145 01:11:46 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.PCWAyVlnnl 00:18:30.402 01:11:46 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:30.659 [2024-07-16 01:11:46.607170] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:30.915 nvme0n1 00:18:30.915 01:11:46 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:30.915 Running I/O for 1 seconds... 
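Two PSK plumbing styles appear side by side in this run: the target still registers the host with the deprecated path form (nvmf_subsystem_add_host --psk /tmp/tmp.PCWAyVlnnl, which is what fires the "PSK path" deprecation warnings above), while the initiator goes through the keyring and references the key by name. A sketch of the initiator side, commands taken from target/tls.sh@227-228 as logged:

    # Register the PSK file as "key0" in the bdevperf app's keyring, then
    # attach the TLS-enabled controller referencing the key by name
    rpc=scripts/rpc.py
    $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.PCWAyVlnnl
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1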
00:18:31.843 00:18:31.843 Latency(us) 00:18:31.843 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:31.843 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:31.843 Verification LBA range: start 0x0 length 0x2000 00:18:31.843 nvme0n1 : 1.02 3496.67 13.66 0.00 0.00 36252.03 7475.96 27379.48 00:18:31.843 =================================================================================================================== 00:18:31.843 Total : 3496.67 13.66 0.00 0.00 36252.03 7475.96 27379.48 00:18:31.843 0 00:18:32.100 01:11:47 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 4175754 00:18:32.100 01:11:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4175754 ']' 00:18:32.100 01:11:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4175754 00:18:32.100 01:11:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:32.100 01:11:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:32.100 01:11:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4175754 00:18:32.100 01:11:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:32.100 01:11:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:32.100 01:11:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4175754' 00:18:32.100 killing process with pid 4175754 00:18:32.100 01:11:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4175754 00:18:32.100 Received shutdown signal, test time was about 1.000000 seconds 00:18:32.100 00:18:32.100 Latency(us) 00:18:32.100 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:32.100 =================================================================================================================== 00:18:32.100 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:32.100 01:11:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4175754 00:18:32.357 01:11:48 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 4175467 00:18:32.357 01:11:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4175467 ']' 00:18:32.357 01:11:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4175467 00:18:32.357 01:11:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:32.357 01:11:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:32.357 01:11:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4175467 00:18:32.357 01:11:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:32.357 01:11:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:32.357 01:11:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4175467' 00:18:32.357 killing process with pid 4175467 00:18:32.357 01:11:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4175467 00:18:32.357 [2024-07-16 01:11:48.169126] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:32.357 01:11:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4175467 00:18:32.615 01:11:48 nvmf_tcp.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:18:32.615 01:11:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:32.615 
01:11:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:32.615 01:11:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:32.615 01:11:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4176030 00:18:32.615 01:11:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:32.615 01:11:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4176030 00:18:32.615 01:11:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4176030 ']' 00:18:32.615 01:11:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:32.615 01:11:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:32.615 01:11:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:32.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:32.615 01:11:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:32.615 01:11:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:32.615 [2024-07-16 01:11:48.491913] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:18:32.615 [2024-07-16 01:11:48.492020] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:32.615 EAL: No free 2048 kB hugepages reported on node 1 00:18:32.615 [2024-07-16 01:11:48.556589] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.873 [2024-07-16 01:11:48.665651] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:32.873 [2024-07-16 01:11:48.665708] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:32.873 [2024-07-16 01:11:48.665732] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:32.873 [2024-07-16 01:11:48.665743] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:32.873 [2024-07-16 01:11:48.665753] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
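Since the target is started with -e 0xFFFF, the run can be inspected afterwards exactly as the app_setup_trace NOTICE lines suggest. A sketch, assuming the build-tree spdk_trace binary and an illustrative destination path:

    # Snapshot the nvmf tracepoints of instance 0 while the target runs,
    # or keep the shm file for offline analysis
    build/bin/spdk_trace -s nvmf -i 0
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0   # destination is an assumption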
00:18:32.873 [2024-07-16 01:11:48.665783] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:32.873 01:11:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:32.873 01:11:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:32.873 01:11:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:32.873 01:11:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:32.873 01:11:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:32.873 01:11:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:32.873 01:11:48 nvmf_tcp.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:18:32.873 01:11:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.873 01:11:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:32.873 [2024-07-16 01:11:48.810509] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:32.873 malloc0 00:18:32.873 [2024-07-16 01:11:48.842352] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:32.874 [2024-07-16 01:11:48.842601] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:33.132 01:11:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.132 01:11:48 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=4176174 00:18:33.132 01:11:48 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 4176174 /var/tmp/bdevperf.sock 00:18:33.132 01:11:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4176174 ']' 00:18:33.132 01:11:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:33.132 01:11:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:33.132 01:11:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:33.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:33.132 01:11:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:33.132 01:11:48 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:33.132 01:11:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:33.132 [2024-07-16 01:11:48.916656] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 
00:18:33.132 [2024-07-16 01:11:48.916735] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4176174 ] 00:18:33.132 EAL: No free 2048 kB hugepages reported on node 1 00:18:33.132 [2024-07-16 01:11:48.976010] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.132 [2024-07-16 01:11:49.084994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:33.390 01:11:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:33.390 01:11:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:33.390 01:11:49 nvmf_tcp.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.PCWAyVlnnl 00:18:33.646 01:11:49 nvmf_tcp.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:33.903 [2024-07-16 01:11:49.708671] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:33.903 nvme0n1 00:18:33.903 01:11:49 nvmf_tcp.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:33.903 Running I/O for 1 seconds... 00:18:35.276 00:18:35.276 Latency(us) 00:18:35.276 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:35.276 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:35.276 Verification LBA range: start 0x0 length 0x2000 00:18:35.276 nvme0n1 : 1.02 3266.61 12.76 0.00 0.00 38792.42 9757.58 44467.39 00:18:35.276 =================================================================================================================== 00:18:35.276 Total : 3266.61 12.76 0.00 0.00 38792.42 9757.58 44467.39 00:18:35.276 0 00:18:35.276 01:11:50 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:18:35.276 01:11:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.276 01:11:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:35.276 01:11:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.276 01:11:51 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:18:35.276 "subsystems": [ 00:18:35.276 { 00:18:35.276 "subsystem": "keyring", 00:18:35.276 "config": [ 00:18:35.276 { 00:18:35.276 "method": "keyring_file_add_key", 00:18:35.276 "params": { 00:18:35.276 "name": "key0", 00:18:35.276 "path": "/tmp/tmp.PCWAyVlnnl" 00:18:35.276 } 00:18:35.276 } 00:18:35.276 ] 00:18:35.276 }, 00:18:35.276 { 00:18:35.276 "subsystem": "iobuf", 00:18:35.276 "config": [ 00:18:35.276 { 00:18:35.276 "method": "iobuf_set_options", 00:18:35.276 "params": { 00:18:35.276 "small_pool_count": 8192, 00:18:35.276 "large_pool_count": 1024, 00:18:35.276 "small_bufsize": 8192, 00:18:35.276 "large_bufsize": 135168 00:18:35.276 } 00:18:35.276 } 00:18:35.276 ] 00:18:35.276 }, 00:18:35.276 { 00:18:35.276 "subsystem": "sock", 00:18:35.276 "config": [ 00:18:35.276 { 00:18:35.276 "method": "sock_set_default_impl", 00:18:35.276 "params": { 00:18:35.276 "impl_name": "posix" 00:18:35.276 } 
00:18:35.276 }, 00:18:35.276 { 00:18:35.276 "method": "sock_impl_set_options", 00:18:35.276 "params": { 00:18:35.276 "impl_name": "ssl", 00:18:35.276 "recv_buf_size": 4096, 00:18:35.276 "send_buf_size": 4096, 00:18:35.276 "enable_recv_pipe": true, 00:18:35.276 "enable_quickack": false, 00:18:35.276 "enable_placement_id": 0, 00:18:35.276 "enable_zerocopy_send_server": true, 00:18:35.276 "enable_zerocopy_send_client": false, 00:18:35.276 "zerocopy_threshold": 0, 00:18:35.276 "tls_version": 0, 00:18:35.276 "enable_ktls": false 00:18:35.276 } 00:18:35.276 }, 00:18:35.276 { 00:18:35.276 "method": "sock_impl_set_options", 00:18:35.276 "params": { 00:18:35.276 "impl_name": "posix", 00:18:35.276 "recv_buf_size": 2097152, 00:18:35.276 "send_buf_size": 2097152, 00:18:35.276 "enable_recv_pipe": true, 00:18:35.276 "enable_quickack": false, 00:18:35.276 "enable_placement_id": 0, 00:18:35.276 "enable_zerocopy_send_server": true, 00:18:35.276 "enable_zerocopy_send_client": false, 00:18:35.276 "zerocopy_threshold": 0, 00:18:35.276 "tls_version": 0, 00:18:35.276 "enable_ktls": false 00:18:35.276 } 00:18:35.276 } 00:18:35.276 ] 00:18:35.276 }, 00:18:35.276 { 00:18:35.276 "subsystem": "vmd", 00:18:35.276 "config": [] 00:18:35.276 }, 00:18:35.276 { 00:18:35.276 "subsystem": "accel", 00:18:35.276 "config": [ 00:18:35.276 { 00:18:35.276 "method": "accel_set_options", 00:18:35.276 "params": { 00:18:35.276 "small_cache_size": 128, 00:18:35.276 "large_cache_size": 16, 00:18:35.276 "task_count": 2048, 00:18:35.276 "sequence_count": 2048, 00:18:35.276 "buf_count": 2048 00:18:35.276 } 00:18:35.276 } 00:18:35.276 ] 00:18:35.276 }, 00:18:35.276 { 00:18:35.276 "subsystem": "bdev", 00:18:35.276 "config": [ 00:18:35.276 { 00:18:35.276 "method": "bdev_set_options", 00:18:35.276 "params": { 00:18:35.276 "bdev_io_pool_size": 65535, 00:18:35.276 "bdev_io_cache_size": 256, 00:18:35.276 "bdev_auto_examine": true, 00:18:35.276 "iobuf_small_cache_size": 128, 00:18:35.276 "iobuf_large_cache_size": 16 00:18:35.276 } 00:18:35.276 }, 00:18:35.276 { 00:18:35.276 "method": "bdev_raid_set_options", 00:18:35.276 "params": { 00:18:35.276 "process_window_size_kb": 1024 00:18:35.276 } 00:18:35.276 }, 00:18:35.276 { 00:18:35.276 "method": "bdev_iscsi_set_options", 00:18:35.276 "params": { 00:18:35.276 "timeout_sec": 30 00:18:35.276 } 00:18:35.276 }, 00:18:35.276 { 00:18:35.276 "method": "bdev_nvme_set_options", 00:18:35.276 "params": { 00:18:35.276 "action_on_timeout": "none", 00:18:35.276 "timeout_us": 0, 00:18:35.276 "timeout_admin_us": 0, 00:18:35.276 "keep_alive_timeout_ms": 10000, 00:18:35.276 "arbitration_burst": 0, 00:18:35.276 "low_priority_weight": 0, 00:18:35.276 "medium_priority_weight": 0, 00:18:35.276 "high_priority_weight": 0, 00:18:35.276 "nvme_adminq_poll_period_us": 10000, 00:18:35.276 "nvme_ioq_poll_period_us": 0, 00:18:35.276 "io_queue_requests": 0, 00:18:35.276 "delay_cmd_submit": true, 00:18:35.276 "transport_retry_count": 4, 00:18:35.276 "bdev_retry_count": 3, 00:18:35.276 "transport_ack_timeout": 0, 00:18:35.276 "ctrlr_loss_timeout_sec": 0, 00:18:35.276 "reconnect_delay_sec": 0, 00:18:35.276 "fast_io_fail_timeout_sec": 0, 00:18:35.276 "disable_auto_failback": false, 00:18:35.276 "generate_uuids": false, 00:18:35.276 "transport_tos": 0, 00:18:35.276 "nvme_error_stat": false, 00:18:35.276 "rdma_srq_size": 0, 00:18:35.276 "io_path_stat": false, 00:18:35.276 "allow_accel_sequence": false, 00:18:35.276 "rdma_max_cq_size": 0, 00:18:35.276 "rdma_cm_event_timeout_ms": 0, 00:18:35.276 "dhchap_digests": [ 00:18:35.276 "sha256", 
00:18:35.276 "sha384", 00:18:35.276 "sha512" 00:18:35.276 ], 00:18:35.276 "dhchap_dhgroups": [ 00:18:35.276 "null", 00:18:35.276 "ffdhe2048", 00:18:35.276 "ffdhe3072", 00:18:35.276 "ffdhe4096", 00:18:35.276 "ffdhe6144", 00:18:35.276 "ffdhe8192" 00:18:35.276 ] 00:18:35.276 } 00:18:35.276 }, 00:18:35.276 { 00:18:35.276 "method": "bdev_nvme_set_hotplug", 00:18:35.276 "params": { 00:18:35.276 "period_us": 100000, 00:18:35.276 "enable": false 00:18:35.276 } 00:18:35.276 }, 00:18:35.276 { 00:18:35.276 "method": "bdev_malloc_create", 00:18:35.276 "params": { 00:18:35.276 "name": "malloc0", 00:18:35.276 "num_blocks": 8192, 00:18:35.276 "block_size": 4096, 00:18:35.276 "physical_block_size": 4096, 00:18:35.276 "uuid": "7900c7c5-8a91-4070-b75c-043e0cc662f9", 00:18:35.276 "optimal_io_boundary": 0 00:18:35.276 } 00:18:35.276 }, 00:18:35.276 { 00:18:35.276 "method": "bdev_wait_for_examine" 00:18:35.276 } 00:18:35.276 ] 00:18:35.276 }, 00:18:35.276 { 00:18:35.276 "subsystem": "nbd", 00:18:35.276 "config": [] 00:18:35.276 }, 00:18:35.276 { 00:18:35.276 "subsystem": "scheduler", 00:18:35.276 "config": [ 00:18:35.276 { 00:18:35.276 "method": "framework_set_scheduler", 00:18:35.276 "params": { 00:18:35.276 "name": "static" 00:18:35.276 } 00:18:35.276 } 00:18:35.276 ] 00:18:35.276 }, 00:18:35.276 { 00:18:35.276 "subsystem": "nvmf", 00:18:35.276 "config": [ 00:18:35.276 { 00:18:35.276 "method": "nvmf_set_config", 00:18:35.276 "params": { 00:18:35.276 "discovery_filter": "match_any", 00:18:35.276 "admin_cmd_passthru": { 00:18:35.276 "identify_ctrlr": false 00:18:35.276 } 00:18:35.276 } 00:18:35.276 }, 00:18:35.276 { 00:18:35.276 "method": "nvmf_set_max_subsystems", 00:18:35.276 "params": { 00:18:35.276 "max_subsystems": 1024 00:18:35.276 } 00:18:35.276 }, 00:18:35.276 { 00:18:35.276 "method": "nvmf_set_crdt", 00:18:35.276 "params": { 00:18:35.276 "crdt1": 0, 00:18:35.276 "crdt2": 0, 00:18:35.276 "crdt3": 0 00:18:35.276 } 00:18:35.276 }, 00:18:35.276 { 00:18:35.276 "method": "nvmf_create_transport", 00:18:35.276 "params": { 00:18:35.277 "trtype": "TCP", 00:18:35.277 "max_queue_depth": 128, 00:18:35.277 "max_io_qpairs_per_ctrlr": 127, 00:18:35.277 "in_capsule_data_size": 4096, 00:18:35.277 "max_io_size": 131072, 00:18:35.277 "io_unit_size": 131072, 00:18:35.277 "max_aq_depth": 128, 00:18:35.277 "num_shared_buffers": 511, 00:18:35.277 "buf_cache_size": 4294967295, 00:18:35.277 "dif_insert_or_strip": false, 00:18:35.277 "zcopy": false, 00:18:35.277 "c2h_success": false, 00:18:35.277 "sock_priority": 0, 00:18:35.277 "abort_timeout_sec": 1, 00:18:35.277 "ack_timeout": 0, 00:18:35.277 "data_wr_pool_size": 0 00:18:35.277 } 00:18:35.277 }, 00:18:35.277 { 00:18:35.277 "method": "nvmf_create_subsystem", 00:18:35.277 "params": { 00:18:35.277 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:35.277 "allow_any_host": false, 00:18:35.277 "serial_number": "00000000000000000000", 00:18:35.277 "model_number": "SPDK bdev Controller", 00:18:35.277 "max_namespaces": 32, 00:18:35.277 "min_cntlid": 1, 00:18:35.277 "max_cntlid": 65519, 00:18:35.277 "ana_reporting": false 00:18:35.277 } 00:18:35.277 }, 00:18:35.277 { 00:18:35.277 "method": "nvmf_subsystem_add_host", 00:18:35.277 "params": { 00:18:35.277 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:35.277 "host": "nqn.2016-06.io.spdk:host1", 00:18:35.277 "psk": "key0" 00:18:35.277 } 00:18:35.277 }, 00:18:35.277 { 00:18:35.277 "method": "nvmf_subsystem_add_ns", 00:18:35.277 "params": { 00:18:35.277 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:35.277 "namespace": { 00:18:35.277 "nsid": 1, 
00:18:35.277 "bdev_name": "malloc0", 00:18:35.277 "nguid": "7900C7C58A914070B75C043E0CC662F9", 00:18:35.277 "uuid": "7900c7c5-8a91-4070-b75c-043e0cc662f9", 00:18:35.277 "no_auto_visible": false 00:18:35.277 } 00:18:35.277 } 00:18:35.277 }, 00:18:35.277 { 00:18:35.277 "method": "nvmf_subsystem_add_listener", 00:18:35.277 "params": { 00:18:35.277 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:35.277 "listen_address": { 00:18:35.277 "trtype": "TCP", 00:18:35.277 "adrfam": "IPv4", 00:18:35.277 "traddr": "10.0.0.2", 00:18:35.277 "trsvcid": "4420" 00:18:35.277 }, 00:18:35.277 "secure_channel": false, 00:18:35.277 "sock_impl": "ssl" 00:18:35.277 } 00:18:35.277 } 00:18:35.277 ] 00:18:35.277 } 00:18:35.277 ] 00:18:35.277 }' 00:18:35.277 01:11:51 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:35.534 01:11:51 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:18:35.534 "subsystems": [ 00:18:35.534 { 00:18:35.534 "subsystem": "keyring", 00:18:35.534 "config": [ 00:18:35.534 { 00:18:35.534 "method": "keyring_file_add_key", 00:18:35.534 "params": { 00:18:35.534 "name": "key0", 00:18:35.534 "path": "/tmp/tmp.PCWAyVlnnl" 00:18:35.534 } 00:18:35.534 } 00:18:35.534 ] 00:18:35.534 }, 00:18:35.534 { 00:18:35.534 "subsystem": "iobuf", 00:18:35.534 "config": [ 00:18:35.534 { 00:18:35.534 "method": "iobuf_set_options", 00:18:35.534 "params": { 00:18:35.534 "small_pool_count": 8192, 00:18:35.534 "large_pool_count": 1024, 00:18:35.534 "small_bufsize": 8192, 00:18:35.534 "large_bufsize": 135168 00:18:35.534 } 00:18:35.534 } 00:18:35.534 ] 00:18:35.534 }, 00:18:35.534 { 00:18:35.534 "subsystem": "sock", 00:18:35.534 "config": [ 00:18:35.534 { 00:18:35.534 "method": "sock_set_default_impl", 00:18:35.534 "params": { 00:18:35.534 "impl_name": "posix" 00:18:35.534 } 00:18:35.534 }, 00:18:35.534 { 00:18:35.534 "method": "sock_impl_set_options", 00:18:35.534 "params": { 00:18:35.534 "impl_name": "ssl", 00:18:35.534 "recv_buf_size": 4096, 00:18:35.534 "send_buf_size": 4096, 00:18:35.534 "enable_recv_pipe": true, 00:18:35.534 "enable_quickack": false, 00:18:35.534 "enable_placement_id": 0, 00:18:35.534 "enable_zerocopy_send_server": true, 00:18:35.534 "enable_zerocopy_send_client": false, 00:18:35.534 "zerocopy_threshold": 0, 00:18:35.534 "tls_version": 0, 00:18:35.534 "enable_ktls": false 00:18:35.534 } 00:18:35.534 }, 00:18:35.534 { 00:18:35.534 "method": "sock_impl_set_options", 00:18:35.534 "params": { 00:18:35.534 "impl_name": "posix", 00:18:35.534 "recv_buf_size": 2097152, 00:18:35.534 "send_buf_size": 2097152, 00:18:35.534 "enable_recv_pipe": true, 00:18:35.534 "enable_quickack": false, 00:18:35.534 "enable_placement_id": 0, 00:18:35.534 "enable_zerocopy_send_server": true, 00:18:35.534 "enable_zerocopy_send_client": false, 00:18:35.534 "zerocopy_threshold": 0, 00:18:35.534 "tls_version": 0, 00:18:35.534 "enable_ktls": false 00:18:35.534 } 00:18:35.534 } 00:18:35.534 ] 00:18:35.534 }, 00:18:35.534 { 00:18:35.534 "subsystem": "vmd", 00:18:35.534 "config": [] 00:18:35.534 }, 00:18:35.534 { 00:18:35.534 "subsystem": "accel", 00:18:35.534 "config": [ 00:18:35.534 { 00:18:35.534 "method": "accel_set_options", 00:18:35.534 "params": { 00:18:35.534 "small_cache_size": 128, 00:18:35.534 "large_cache_size": 16, 00:18:35.534 "task_count": 2048, 00:18:35.534 "sequence_count": 2048, 00:18:35.534 "buf_count": 2048 00:18:35.534 } 00:18:35.534 } 00:18:35.534 ] 00:18:35.534 }, 00:18:35.534 { 00:18:35.534 "subsystem": "bdev", 
00:18:35.534 "config": [ 00:18:35.534 { 00:18:35.534 "method": "bdev_set_options", 00:18:35.534 "params": { 00:18:35.534 "bdev_io_pool_size": 65535, 00:18:35.534 "bdev_io_cache_size": 256, 00:18:35.534 "bdev_auto_examine": true, 00:18:35.534 "iobuf_small_cache_size": 128, 00:18:35.534 "iobuf_large_cache_size": 16 00:18:35.534 } 00:18:35.534 }, 00:18:35.534 { 00:18:35.534 "method": "bdev_raid_set_options", 00:18:35.534 "params": { 00:18:35.534 "process_window_size_kb": 1024 00:18:35.534 } 00:18:35.534 }, 00:18:35.534 { 00:18:35.534 "method": "bdev_iscsi_set_options", 00:18:35.534 "params": { 00:18:35.534 "timeout_sec": 30 00:18:35.534 } 00:18:35.534 }, 00:18:35.534 { 00:18:35.534 "method": "bdev_nvme_set_options", 00:18:35.534 "params": { 00:18:35.534 "action_on_timeout": "none", 00:18:35.534 "timeout_us": 0, 00:18:35.535 "timeout_admin_us": 0, 00:18:35.535 "keep_alive_timeout_ms": 10000, 00:18:35.535 "arbitration_burst": 0, 00:18:35.535 "low_priority_weight": 0, 00:18:35.535 "medium_priority_weight": 0, 00:18:35.535 "high_priority_weight": 0, 00:18:35.535 "nvme_adminq_poll_period_us": 10000, 00:18:35.535 "nvme_ioq_poll_period_us": 0, 00:18:35.535 "io_queue_requests": 512, 00:18:35.535 "delay_cmd_submit": true, 00:18:35.535 "transport_retry_count": 4, 00:18:35.535 "bdev_retry_count": 3, 00:18:35.535 "transport_ack_timeout": 0, 00:18:35.535 "ctrlr_loss_timeout_sec": 0, 00:18:35.535 "reconnect_delay_sec": 0, 00:18:35.535 "fast_io_fail_timeout_sec": 0, 00:18:35.535 "disable_auto_failback": false, 00:18:35.535 "generate_uuids": false, 00:18:35.535 "transport_tos": 0, 00:18:35.535 "nvme_error_stat": false, 00:18:35.535 "rdma_srq_size": 0, 00:18:35.535 "io_path_stat": false, 00:18:35.535 "allow_accel_sequence": false, 00:18:35.535 "rdma_max_cq_size": 0, 00:18:35.535 "rdma_cm_event_timeout_ms": 0, 00:18:35.535 "dhchap_digests": [ 00:18:35.535 "sha256", 00:18:35.535 "sha384", 00:18:35.535 "sha512" 00:18:35.535 ], 00:18:35.535 "dhchap_dhgroups": [ 00:18:35.535 "null", 00:18:35.535 "ffdhe2048", 00:18:35.535 "ffdhe3072", 00:18:35.535 "ffdhe4096", 00:18:35.535 "ffdhe6144", 00:18:35.535 "ffdhe8192" 00:18:35.535 ] 00:18:35.535 } 00:18:35.535 }, 00:18:35.535 { 00:18:35.535 "method": "bdev_nvme_attach_controller", 00:18:35.535 "params": { 00:18:35.535 "name": "nvme0", 00:18:35.535 "trtype": "TCP", 00:18:35.535 "adrfam": "IPv4", 00:18:35.535 "traddr": "10.0.0.2", 00:18:35.535 "trsvcid": "4420", 00:18:35.535 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:35.535 "prchk_reftag": false, 00:18:35.535 "prchk_guard": false, 00:18:35.535 "ctrlr_loss_timeout_sec": 0, 00:18:35.535 "reconnect_delay_sec": 0, 00:18:35.535 "fast_io_fail_timeout_sec": 0, 00:18:35.535 "psk": "key0", 00:18:35.535 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:35.535 "hdgst": false, 00:18:35.535 "ddgst": false 00:18:35.535 } 00:18:35.535 }, 00:18:35.535 { 00:18:35.535 "method": "bdev_nvme_set_hotplug", 00:18:35.535 "params": { 00:18:35.535 "period_us": 100000, 00:18:35.535 "enable": false 00:18:35.535 } 00:18:35.535 }, 00:18:35.535 { 00:18:35.535 "method": "bdev_enable_histogram", 00:18:35.535 "params": { 00:18:35.535 "name": "nvme0n1", 00:18:35.535 "enable": true 00:18:35.535 } 00:18:35.535 }, 00:18:35.535 { 00:18:35.535 "method": "bdev_wait_for_examine" 00:18:35.535 } 00:18:35.535 ] 00:18:35.535 }, 00:18:35.535 { 00:18:35.535 "subsystem": "nbd", 00:18:35.535 "config": [] 00:18:35.535 } 00:18:35.535 ] 00:18:35.535 }' 00:18:35.535 01:11:51 nvmf_tcp.nvmf_tls -- target/tls.sh@268 -- # killprocess 4176174 00:18:35.535 01:11:51 nvmf_tcp.nvmf_tls 
-- common/autotest_common.sh@948 -- # '[' -z 4176174 ']' 00:18:35.535 01:11:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4176174 00:18:35.535 01:11:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:35.535 01:11:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:35.535 01:11:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4176174 00:18:35.535 01:11:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:35.535 01:11:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:35.535 01:11:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4176174' 00:18:35.535 killing process with pid 4176174 00:18:35.535 01:11:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4176174 00:18:35.535 Received shutdown signal, test time was about 1.000000 seconds 00:18:35.535 00:18:35.535 Latency(us) 00:18:35.535 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:35.535 =================================================================================================================== 00:18:35.535 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:35.535 01:11:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4176174 00:18:35.793 01:11:51 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # killprocess 4176030 00:18:35.793 01:11:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4176030 ']' 00:18:35.793 01:11:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4176030 00:18:35.793 01:11:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:35.793 01:11:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:35.793 01:11:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4176030 00:18:35.793 01:11:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:35.793 01:11:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:35.793 01:11:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4176030' 00:18:35.793 killing process with pid 4176030 00:18:35.793 01:11:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4176030 00:18:35.793 01:11:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4176030 00:18:36.051 01:11:51 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:18:36.051 01:11:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:36.051 01:11:51 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:18:36.051 "subsystems": [ 00:18:36.051 { 00:18:36.051 "subsystem": "keyring", 00:18:36.051 "config": [ 00:18:36.051 { 00:18:36.051 "method": "keyring_file_add_key", 00:18:36.051 "params": { 00:18:36.051 "name": "key0", 00:18:36.051 "path": "/tmp/tmp.PCWAyVlnnl" 00:18:36.051 } 00:18:36.051 } 00:18:36.051 ] 00:18:36.051 }, 00:18:36.051 { 00:18:36.051 "subsystem": "iobuf", 00:18:36.051 "config": [ 00:18:36.051 { 00:18:36.051 "method": "iobuf_set_options", 00:18:36.051 "params": { 00:18:36.051 "small_pool_count": 8192, 00:18:36.051 "large_pool_count": 1024, 00:18:36.051 "small_bufsize": 8192, 00:18:36.051 "large_bufsize": 135168 00:18:36.051 } 00:18:36.051 } 00:18:36.051 ] 00:18:36.051 }, 00:18:36.051 { 00:18:36.051 "subsystem": "sock", 00:18:36.051 "config": [ 00:18:36.051 { 
00:18:36.051 "method": "sock_set_default_impl", 00:18:36.051 "params": { 00:18:36.051 "impl_name": "posix" 00:18:36.051 } 00:18:36.051 }, 00:18:36.051 { 00:18:36.051 "method": "sock_impl_set_options", 00:18:36.051 "params": { 00:18:36.051 "impl_name": "ssl", 00:18:36.051 "recv_buf_size": 4096, 00:18:36.051 "send_buf_size": 4096, 00:18:36.051 "enable_recv_pipe": true, 00:18:36.051 "enable_quickack": false, 00:18:36.051 "enable_placement_id": 0, 00:18:36.051 "enable_zerocopy_send_server": true, 00:18:36.051 "enable_zerocopy_send_client": false, 00:18:36.051 "zerocopy_threshold": 0, 00:18:36.051 "tls_version": 0, 00:18:36.051 "enable_ktls": false 00:18:36.051 } 00:18:36.051 }, 00:18:36.051 { 00:18:36.051 "method": "sock_impl_set_options", 00:18:36.051 "params": { 00:18:36.051 "impl_name": "posix", 00:18:36.051 "recv_buf_size": 2097152, 00:18:36.051 "send_buf_size": 2097152, 00:18:36.051 "enable_recv_pipe": true, 00:18:36.051 "enable_quickack": false, 00:18:36.051 "enable_placement_id": 0, 00:18:36.051 "enable_zerocopy_send_server": true, 00:18:36.051 "enable_zerocopy_send_client": false, 00:18:36.051 "zerocopy_threshold": 0, 00:18:36.051 "tls_version": 0, 00:18:36.051 "enable_ktls": false 00:18:36.051 } 00:18:36.051 } 00:18:36.051 ] 00:18:36.051 }, 00:18:36.051 { 00:18:36.051 "subsystem": "vmd", 00:18:36.051 "config": [] 00:18:36.051 }, 00:18:36.051 { 00:18:36.051 "subsystem": "accel", 00:18:36.051 "config": [ 00:18:36.051 { 00:18:36.051 "method": "accel_set_options", 00:18:36.051 "params": { 00:18:36.051 "small_cache_size": 128, 00:18:36.051 "large_cache_size": 16, 00:18:36.051 "task_count": 2048, 00:18:36.051 "sequence_count": 2048, 00:18:36.051 "buf_count": 2048 00:18:36.051 } 00:18:36.051 } 00:18:36.051 ] 00:18:36.051 }, 00:18:36.051 { 00:18:36.051 "subsystem": "bdev", 00:18:36.051 "config": [ 00:18:36.051 { 00:18:36.051 "method": "bdev_set_options", 00:18:36.051 "params": { 00:18:36.051 "bdev_io_pool_size": 65535, 00:18:36.051 "bdev_io_cache_size": 256, 00:18:36.051 "bdev_auto_examine": true, 00:18:36.051 "iobuf_small_cache_size": 128, 00:18:36.051 "iobuf_large_cache_size": 16 00:18:36.051 } 00:18:36.052 }, 00:18:36.052 { 00:18:36.052 "method": "bdev_raid_set_options", 00:18:36.052 "params": { 00:18:36.052 "process_window_size_kb": 1024 00:18:36.052 } 00:18:36.052 }, 00:18:36.052 { 00:18:36.052 "method": "bdev_iscsi_set_options", 00:18:36.052 "params": { 00:18:36.052 "timeout_sec": 30 00:18:36.052 } 00:18:36.052 }, 00:18:36.052 { 00:18:36.052 "method": "bdev_nvme_set_options", 00:18:36.052 "params": { 00:18:36.052 "action_on_timeout": "none", 00:18:36.052 "timeout_us": 0, 00:18:36.052 "timeout_admin_us": 0, 00:18:36.052 "keep_alive_timeout_ms": 10000, 00:18:36.052 "arbitration_burst": 0, 00:18:36.052 "low_priority_weight": 0, 00:18:36.052 "medium_priority_weight": 0, 00:18:36.052 "high_priority_weight": 0, 00:18:36.052 "nvme_adminq_poll_period_us": 10000, 00:18:36.052 "nvme_ioq_poll_period_us": 0, 00:18:36.052 "io_queue_requests": 0, 00:18:36.052 "delay_cmd_submit": true, 00:18:36.052 "transport_retry_count": 4, 00:18:36.052 "bdev_retry_count": 3, 00:18:36.052 "transport_ack_timeout": 0, 00:18:36.052 "ctrlr_loss_timeout_sec": 0, 00:18:36.052 "reconnect_delay_sec": 0, 00:18:36.052 "fast_io_fail_timeout_sec": 0, 00:18:36.052 "disable_auto_failback": false, 00:18:36.052 "generate_uuids": false, 00:18:36.052 "transport_tos": 0, 00:18:36.052 "nvme_error_stat": false, 00:18:36.052 "rdma_srq_size": 0, 00:18:36.052 "io_path_stat": false, 00:18:36.052 "allow_accel_sequence": false, 00:18:36.052 
"rdma_max_cq_size": 0, 00:18:36.052 "rdma_cm_event_timeout_ms": 0, 00:18:36.052 "dhchap_digests": [ 00:18:36.052 "sha256", 00:18:36.052 "sha384", 00:18:36.052 "sha512" 00:18:36.052 ], 00:18:36.052 "dhchap_dhgroups": [ 00:18:36.052 "null", 00:18:36.052 "ffdhe2048", 00:18:36.052 "ffdhe3072", 00:18:36.052 "ffdhe4096", 00:18:36.052 "ffdhe6144", 00:18:36.052 "ffdhe8192" 00:18:36.052 ] 00:18:36.052 } 00:18:36.052 }, 00:18:36.052 { 00:18:36.052 "method": "bdev_nvme_set_hotplug", 00:18:36.052 "params": { 00:18:36.052 "period_us": 100000, 00:18:36.052 "enable": false 00:18:36.052 } 00:18:36.052 }, 00:18:36.052 { 00:18:36.052 "method": "bdev_malloc_create", 00:18:36.052 "params": { 00:18:36.052 "name": "malloc0", 00:18:36.052 "num_blocks": 8192, 00:18:36.052 "block_size": 4096, 00:18:36.052 "physical_block_size": 4096, 00:18:36.052 "uuid": "7900c7c5-8a91-4070-b75c-043e0cc662f9", 00:18:36.052 "optimal_io_boundary": 0 00:18:36.052 } 00:18:36.052 }, 00:18:36.052 { 00:18:36.052 "method": "bdev_wait_for_examine" 00:18:36.052 } 00:18:36.052 ] 00:18:36.052 }, 00:18:36.052 { 00:18:36.052 "subsystem": "nbd", 00:18:36.052 "config": [] 00:18:36.052 }, 00:18:36.052 { 00:18:36.052 "subsystem": "scheduler", 00:18:36.052 "config": [ 00:18:36.052 { 00:18:36.052 "method": "framework_set_scheduler", 00:18:36.052 "params": { 00:18:36.052 "name": "static" 00:18:36.052 } 00:18:36.052 } 00:18:36.052 ] 00:18:36.052 }, 00:18:36.052 { 00:18:36.052 "subsystem": "nvmf", 00:18:36.052 "config": [ 00:18:36.052 { 00:18:36.052 "method": "nvmf_set_config", 00:18:36.052 "params": { 00:18:36.052 "discovery_filter": "match_any", 00:18:36.052 "admin_cmd_passthru": { 00:18:36.052 "identify_ctrlr": false 00:18:36.052 } 00:18:36.052 } 00:18:36.052 }, 00:18:36.052 { 00:18:36.052 "method": "nvmf_set_max_subsystems", 00:18:36.052 "params": { 00:18:36.052 "max_subsystems": 1024 00:18:36.052 } 00:18:36.052 }, 00:18:36.052 { 00:18:36.052 "method": "nvmf_set_crdt", 00:18:36.052 "params": { 00:18:36.052 "crdt1": 0, 00:18:36.052 "crdt2": 0, 00:18:36.052 "crdt3": 0 00:18:36.052 } 00:18:36.052 }, 00:18:36.052 { 00:18:36.052 "method": "nvmf_create_transport", 00:18:36.052 "params": { 00:18:36.052 "trtype": "TCP", 00:18:36.052 "max_queue_depth": 128, 00:18:36.052 "max_io_qpairs_per_ctrlr": 127, 00:18:36.052 "in_capsule_data_size": 4096, 00:18:36.052 "max_io_size": 131072, 00:18:36.052 "io_unit_size": 131072, 00:18:36.052 "max_aq_depth": 128, 00:18:36.052 "num_shared_buffers": 511, 00:18:36.052 "buf_cache_size": 4294967295, 00:18:36.052 "dif_insert_or_strip": false, 00:18:36.052 "zcopy": false, 00:18:36.052 "c2h_success": false, 00:18:36.052 "sock_priority": 0, 00:18:36.052 "abort_timeout_sec": 1, 00:18:36.052 "ack_timeout": 0, 00:18:36.052 "data_wr_pool_size": 0 00:18:36.052 } 00:18:36.052 }, 00:18:36.052 { 00:18:36.052 "method": "nvmf_create_subsystem", 00:18:36.052 "params": { 00:18:36.052 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:36.052 "allow_any_host": false, 00:18:36.052 "serial_number": "00000000000000000000", 00:18:36.052 "model_number": "SPDK bdev Controller", 00:18:36.052 "max_namespaces": 32, 00:18:36.052 "min_cntlid": 1, 00:18:36.052 "max_cntlid": 65519, 00:18:36.052 "ana_reporting": false 00:18:36.052 } 00:18:36.052 }, 00:18:36.052 { 00:18:36.052 "method": "nvmf_subsystem_add_host", 00:18:36.052 "params": { 00:18:36.052 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:36.052 "host": "nqn.2016-06.io.spdk:host1", 00:18:36.052 "psk": "key0" 00:18:36.052 } 00:18:36.052 }, 00:18:36.052 { 00:18:36.052 "method": "nvmf_subsystem_add_ns", 00:18:36.052 
"params": { 00:18:36.052 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:36.052 "namespace": { 00:18:36.052 "nsid": 1, 00:18:36.052 "bdev_name": "malloc0", 00:18:36.052 "nguid": "7900C7C58A914070B75C043E0CC662F9", 00:18:36.052 "uuid": "7900c7c5-8a91-4070-b75c-043e0cc662f9", 00:18:36.052 "no_auto_visible": false 00:18:36.052 } 00:18:36.052 } 00:18:36.052 }, 00:18:36.052 { 00:18:36.052 "method": "nvmf_subsystem_add_listener", 00:18:36.052 "params": { 00:18:36.052 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:36.052 "listen_address": { 00:18:36.052 "trtype": "TCP", 00:18:36.052 "adrfam": "IPv4", 00:18:36.052 "traddr": "10.0.0.2", 00:18:36.052 "trsvcid": "4420" 00:18:36.052 }, 00:18:36.052 "secure_channel": false, 00:18:36.052 "sock_impl": "ssl" 00:18:36.052 } 00:18:36.052 } 00:18:36.052 ] 00:18:36.052 } 00:18:36.052 ] 00:18:36.052 }' 00:18:36.052 01:11:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:36.052 01:11:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:36.052 01:11:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4176480 00:18:36.052 01:11:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:36.052 01:11:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4176480 00:18:36.052 01:11:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4176480 ']' 00:18:36.052 01:11:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:36.052 01:11:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:36.052 01:11:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:36.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:36.052 01:11:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:36.052 01:11:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:36.052 [2024-07-16 01:11:52.015396] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:18:36.052 [2024-07-16 01:11:52.015473] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:36.311 EAL: No free 2048 kB hugepages reported on node 1 00:18:36.311 [2024-07-16 01:11:52.083099] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.311 [2024-07-16 01:11:52.188183] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:36.311 [2024-07-16 01:11:52.188238] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:36.311 [2024-07-16 01:11:52.188262] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:36.311 [2024-07-16 01:11:52.188273] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:36.311 [2024-07-16 01:11:52.188283] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:36.311 [2024-07-16 01:11:52.188353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:36.569 [2024-07-16 01:11:52.424617] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:36.569 [2024-07-16 01:11:52.456650] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:36.569 [2024-07-16 01:11:52.465122] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:37.135 01:11:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:37.135 01:11:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:37.135 01:11:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:37.135 01:11:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:37.135 01:11:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:37.135 01:11:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:37.135 01:11:53 nvmf_tcp.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=4176618 00:18:37.135 01:11:53 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 4176618 /var/tmp/bdevperf.sock 00:18:37.135 01:11:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4176618 ']' 00:18:37.135 01:11:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:37.135 01:11:53 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:37.135 01:11:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:37.135 01:11:53 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:18:37.135 "subsystems": [ 00:18:37.135 { 00:18:37.135 "subsystem": "keyring", 00:18:37.135 "config": [ 00:18:37.135 { 00:18:37.135 "method": "keyring_file_add_key", 00:18:37.135 "params": { 00:18:37.135 "name": "key0", 00:18:37.135 "path": "/tmp/tmp.PCWAyVlnnl" 00:18:37.135 } 00:18:37.135 } 00:18:37.135 ] 00:18:37.135 }, 00:18:37.135 { 00:18:37.135 "subsystem": "iobuf", 00:18:37.135 "config": [ 00:18:37.135 { 00:18:37.135 "method": "iobuf_set_options", 00:18:37.135 "params": { 00:18:37.135 "small_pool_count": 8192, 00:18:37.135 "large_pool_count": 1024, 00:18:37.135 "small_bufsize": 8192, 00:18:37.136 "large_bufsize": 135168 00:18:37.136 } 00:18:37.136 } 00:18:37.136 ] 00:18:37.136 }, 00:18:37.136 { 00:18:37.136 "subsystem": "sock", 00:18:37.136 "config": [ 00:18:37.136 { 00:18:37.136 "method": "sock_set_default_impl", 00:18:37.136 "params": { 00:18:37.136 "impl_name": "posix" 00:18:37.136 } 00:18:37.136 }, 00:18:37.136 { 00:18:37.136 "method": "sock_impl_set_options", 00:18:37.136 "params": { 00:18:37.136 "impl_name": "ssl", 00:18:37.136 "recv_buf_size": 4096, 00:18:37.136 "send_buf_size": 4096, 00:18:37.136 "enable_recv_pipe": true, 00:18:37.136 "enable_quickack": false, 00:18:37.136 "enable_placement_id": 0, 00:18:37.136 "enable_zerocopy_send_server": true, 00:18:37.136 "enable_zerocopy_send_client": false, 00:18:37.136 "zerocopy_threshold": 0, 00:18:37.136 "tls_version": 0, 00:18:37.136 "enable_ktls": false 00:18:37.136 } 00:18:37.136 }, 00:18:37.136 { 00:18:37.136 "method": "sock_impl_set_options", 00:18:37.136 "params": { 00:18:37.136 "impl_name": "posix", 00:18:37.136 "recv_buf_size": 2097152, 00:18:37.136 "send_buf_size": 2097152, 00:18:37.136 
"enable_recv_pipe": true, 00:18:37.136 "enable_quickack": false, 00:18:37.136 "enable_placement_id": 0, 00:18:37.136 "enable_zerocopy_send_server": true, 00:18:37.136 "enable_zerocopy_send_client": false, 00:18:37.136 "zerocopy_threshold": 0, 00:18:37.136 "tls_version": 0, 00:18:37.136 "enable_ktls": false 00:18:37.136 } 00:18:37.136 } 00:18:37.136 ] 00:18:37.136 }, 00:18:37.136 { 00:18:37.136 "subsystem": "vmd", 00:18:37.136 "config": [] 00:18:37.136 }, 00:18:37.136 { 00:18:37.136 "subsystem": "accel", 00:18:37.136 "config": [ 00:18:37.136 { 00:18:37.136 "method": "accel_set_options", 00:18:37.136 "params": { 00:18:37.136 "small_cache_size": 128, 00:18:37.136 "large_cache_size": 16, 00:18:37.136 "task_count": 2048, 00:18:37.136 "sequence_count": 2048, 00:18:37.136 "buf_count": 2048 00:18:37.136 } 00:18:37.136 } 00:18:37.136 ] 00:18:37.136 }, 00:18:37.136 { 00:18:37.136 "subsystem": "bdev", 00:18:37.136 "config": [ 00:18:37.136 { 00:18:37.136 "method": "bdev_set_options", 00:18:37.136 "params": { 00:18:37.136 "bdev_io_pool_size": 65535, 00:18:37.136 "bdev_io_cache_size": 256, 00:18:37.136 "bdev_auto_examine": true, 00:18:37.136 "iobuf_small_cache_size": 128, 00:18:37.136 "iobuf_large_cache_size": 16 00:18:37.136 } 00:18:37.136 }, 00:18:37.136 { 00:18:37.136 "method": "bdev_raid_set_options", 00:18:37.136 "params": { 00:18:37.136 "process_window_size_kb": 1024 00:18:37.136 } 00:18:37.136 }, 00:18:37.136 { 00:18:37.136 "method": "bdev_iscsi_set_options", 00:18:37.136 "params": { 00:18:37.136 "timeout_sec": 30 00:18:37.136 } 00:18:37.136 }, 00:18:37.136 { 00:18:37.136 "method": "bdev_nvme_set_options", 00:18:37.136 "params": { 00:18:37.136 "action_on_timeout": "none", 00:18:37.136 "timeout_us": 0, 00:18:37.136 "timeout_admin_us": 0, 00:18:37.136 "keep_alive_timeout_ms": 10000, 00:18:37.136 "arbitration_burst": 0, 00:18:37.136 "low_priority_weight": 0, 00:18:37.136 "medium_priority_weight": 0, 00:18:37.136 "high_priority_weight": 0, 00:18:37.136 "nvme_adminq_poll_period_us": 10000, 00:18:37.136 "nvme_ioq_poll_period_us": 0, 00:18:37.136 "io_queue_requests": 512, 00:18:37.136 "delay_cmd_submit": true, 00:18:37.136 "transport_retry_count": 4, 00:18:37.136 "bdev_retry_count": 3, 00:18:37.136 "transport_ack_timeout": 0, 00:18:37.136 "ctrlr_loss_timeout_sec": 0, 00:18:37.136 "reconnect_delay_sec": 0, 00:18:37.136 "fast_io_fail_timeout_sec": 0, 00:18:37.136 "disable_auto_failback": false, 00:18:37.136 "generate_uuids": false, 00:18:37.136 "transport_tos": 0, 00:18:37.136 "nvme_error_stat": false, 00:18:37.136 "rdma_srq_size": 0, 00:18:37.136 "io_path_stat": false, 00:18:37.136 "allow_accel_sequence": false, 00:18:37.136 "rdma_max_cq_size": 0, 00:18:37.136 "rdma_cm_event_timeout_ms": 0, 00:18:37.136 "dhchap_digests": [ 00:18:37.136 "sha256", 00:18:37.136 "sha384", 00:18:37.136 "sha512" 00:18:37.136 ], 00:18:37.136 "dhchap_dhgroups": [ 00:18:37.136 "null", 00:18:37.136 "ffdhe2048", 00:18:37.136 "ffdhe3072", 00:18:37.136 "ffdhe4096", 00:18:37.136 "ffdhe6144", 00:18:37.136 "ffdhe8192" 00:18:37.136 ] 00:18:37.136 } 00:18:37.136 }, 00:18:37.136 { 00:18:37.136 "method": "bdev_nvme_attach_controller", 00:18:37.136 "params": { 00:18:37.136 "name": "nvme0", 00:18:37.136 "trtype": "TCP", 00:18:37.136 "adrfam": "IPv4", 00:18:37.136 "traddr": "10.0.0.2", 00:18:37.136 "trsvcid": "4420", 00:18:37.136 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:37.136 "prchk_reftag": false, 00:18:37.136 "prchk_guard": false, 00:18:37.136 "ctrlr_loss_timeout_sec": 0, 00:18:37.136 "reconnect_delay_sec": 0, 00:18:37.136 
"fast_io_fail_timeout_sec": 0, 00:18:37.136 "psk": "key0", 00:18:37.136 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:37.136 "hdgst": false, 00:18:37.136 "ddgst": false 00:18:37.136 } 00:18:37.136 }, 00:18:37.136 { 00:18:37.136 "method": "bdev_nvme_set_hotplug", 00:18:37.136 "params": { 00:18:37.136 "period_us": 100000, 00:18:37.136 "enable": false 00:18:37.136 } 00:18:37.136 }, 00:18:37.136 { 00:18:37.136 "method": "bdev_enable_histogram", 00:18:37.136 "params": { 00:18:37.136 "name": "nvme0n1", 00:18:37.137 "enable": true 00:18:37.137 } 00:18:37.137 }, 00:18:37.137 { 00:18:37.137 "method": "bdev_wait_for_examine" 00:18:37.137 } 00:18:37.137 ] 00:18:37.137 }, 00:18:37.137 { 00:18:37.137 "subsystem": "nbd", 00:18:37.137 "config": [] 00:18:37.137 } 00:18:37.137 ] 00:18:37.137 }' 00:18:37.137 01:11:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:37.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:37.137 01:11:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:37.137 01:11:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:37.137 [2024-07-16 01:11:53.074724] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:18:37.137 [2024-07-16 01:11:53.074813] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4176618 ] 00:18:37.137 EAL: No free 2048 kB hugepages reported on node 1 00:18:37.395 [2024-07-16 01:11:53.136276] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.395 [2024-07-16 01:11:53.243750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:37.653 [2024-07-16 01:11:53.421690] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:38.219 01:11:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:38.219 01:11:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:38.219 01:11:54 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:38.219 01:11:54 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:18:38.477 01:11:54 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.477 01:11:54 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:38.477 Running I/O for 1 seconds... 
00:18:39.848
00:18:39.848 Latency(us)
00:18:39.848 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:39.848 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:18:39.848 Verification LBA range: start 0x0 length 0x2000
00:18:39.848 nvme0n1 : 1.05 2618.87 10.23 0.00 0.00 48172.93 7233.23 64079.64
00:18:39.848 ===================================================================================================================
00:18:39.848 Total : 2618.87 10.23 0.00 0.00 48172.93 7233.23 64079.64
00:18:39.848 0
00:18:39.848 01:11:55 nvmf_tcp.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT
00:18:39.848 01:11:55 nvmf_tcp.nvmf_tls -- target/tls.sh@281 -- # cleanup
00:18:39.848 01:11:55 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0
00:18:39.848 01:11:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id
00:18:39.848 01:11:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0
00:18:39.848 01:11:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']'
00:18:39.848 01:11:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:18:39.848 01:11:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0
00:18:39.848 01:11:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]]
00:18:39.848 01:11:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files
00:18:39.848 01:11:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:18:39.848 nvmf_trace.0
00:18:39.848 01:11:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0
00:18:39.848 01:11:55 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 4176618
00:18:39.848 01:11:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4176618 ']'
00:18:39.848 01:11:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4176618
00:18:39.848 01:11:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname
00:18:39.848 01:11:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:18:39.848 01:11:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4176618
00:18:39.848 01:11:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:18:39.848 01:11:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:18:39.848 01:11:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4176618'
00:18:39.848 killing process with pid 4176618
00:18:39.848 01:11:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4176618
00:18:39.848 Received shutdown signal, test time was about 1.000000 seconds
00:18:39.848
00:18:39.848 Latency(us)
00:18:39.848 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:39.848 ===================================================================================================================
00:18:39.848 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:18:39.848 01:11:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4176618
00:18:39.848 01:11:55 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini
00:18:39.848 01:11:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup
00:18:39.848 01:11:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync
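As a quick sanity check on the verify run above: with the 4 KiB I/O size requested via '-o 4k', the MiB/s column follows directly from IOPS, 2618.87 x 4096 bytes = 10.23 MiB/s, matching the table. A one-line check of that conversion:

  awk 'BEGIN { printf "%.2f MiB/s\n", 2618.87 * 4096 / (1024 * 1024) }'    # prints 10.23 MiB/s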
00:18:39.848 01:11:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:39.848 01:11:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:18:39.848 01:11:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:39.848 01:11:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:39.848 rmmod nvme_tcp 00:18:40.106 rmmod nvme_fabrics 00:18:40.106 rmmod nvme_keyring 00:18:40.106 01:11:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:40.106 01:11:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:18:40.106 01:11:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:18:40.106 01:11:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 4176480 ']' 00:18:40.106 01:11:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 4176480 00:18:40.106 01:11:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4176480 ']' 00:18:40.106 01:11:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4176480 00:18:40.106 01:11:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:40.106 01:11:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:40.106 01:11:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4176480 00:18:40.106 01:11:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:40.106 01:11:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:40.106 01:11:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4176480' 00:18:40.106 killing process with pid 4176480 00:18:40.106 01:11:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4176480 00:18:40.106 01:11:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4176480 00:18:40.366 01:11:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:40.366 01:11:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:40.366 01:11:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:40.366 01:11:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:40.366 01:11:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:40.366 01:11:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:40.366 01:11:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:40.366 01:11:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:42.274 01:11:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:42.274 01:11:58 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.jLrWVStUll /tmp/tmp.5fQegtYqjh /tmp/tmp.PCWAyVlnnl 00:18:42.274 00:18:42.274 real 1m20.058s 00:18:42.274 user 2m3.987s 00:18:42.274 sys 0m27.688s 00:18:42.274 01:11:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:42.274 01:11:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:42.274 ************************************ 00:18:42.274 END TEST nvmf_tls 00:18:42.274 ************************************ 00:18:42.274 01:11:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:42.274 01:11:58 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:42.274 01:11:58 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:42.274 01:11:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:42.274 01:11:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:42.274 ************************************ 00:18:42.274 START TEST nvmf_fips 00:18:42.274 ************************************ 00:18:42.274 01:11:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:42.534 * Looking for test storage... 00:18:42.534 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:18:42.534 
01:11:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:18:42.534 01:11:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:18:42.535 01:11:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:18:42.535 01:11:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:42.535 01:11:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:42.535 01:11:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:18:42.535 01:11:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:18:42.535 01:11:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:18:42.535 01:11:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:18:42.535 01:11:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:18:42.535 01:11:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:18:42.535 01:11:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:42.535 01:11:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:18:42.535 01:11:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:18:42.535 01:11:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:18:42.535 01:11:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:18:42.535 01:11:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:18:42.535 01:11:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:18:42.535 01:11:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:42.535 01:11:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:18:42.535 01:11:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:18:42.535 01:11:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:18:42.535 01:11:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:42.535 01:11:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:18:42.535 01:11:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:42.535 01:11:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:18:42.535 01:11:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:42.535 01:11:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:18:42.535 01:11:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:42.535 01:11:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:18:42.535 01:11:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:18:42.535 01:11:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:18:42.535 Error setting digest 00:18:42.535 00C24C97047F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:18:42.535 00C24C97047F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:18:42.535 01:11:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:18:42.535 01:11:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:42.535 01:11:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:42.535 01:11:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:42.535 01:11:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:18:42.535 01:11:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:42.535 01:11:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:42.535 01:11:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:42.535 01:11:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:42.535 01:11:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:42.535 01:11:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:42.535 01:11:58 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:42.535 01:11:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:42.535 01:11:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:42.535 01:11:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:42.535 01:11:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:18:42.535 01:11:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:45.066 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:45.066 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:18:45.066 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:45.066 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:45.066 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:45.066 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:45.066 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:45.066 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:18:45.066 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:45.066 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:18:45.066 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:18:45.066 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:18:45.066 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:18:45.066 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:18:45.066 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:18:45.066 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:45.066 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:45.066 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:45.066 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:45.066 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:45.066 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:45.066 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:45.066 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:45.066 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:45.066 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:45.066 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:45.066 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:45.066 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:45.066 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:45.067 
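The FIPS probing traced in fips.sh above reduces to two manual checks: list the loaded OpenSSL providers (the test expects a base entry and a fips entry) and confirm that a non-approved digest such as MD5 is refused, which is exactly the 'Error setting digest' failure shown earlier. A rough hand-run equivalent, assuming OpenSSL 3.x with the spdk_fips.conf generated by the test (the /dev/null input here is a stand-in for the file descriptor used in the trace):

  openssl list -providers | grep name    # expect 'openssl base provider' plus a '... fips provider' line
  if ! openssl md5 /dev/null >/dev/null 2>&1; then
    echo 'MD5 rejected: FIPS provider is active'
  fi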
01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:18:45.067 Found 0000:09:00.0 (0x8086 - 0x159b) 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:18:45.067 Found 0000:09:00.1 (0x8086 - 0x159b) 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:18:45.067 Found net devices under 0000:09:00.0: cvl_0_0 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:18:45.067 Found net devices under 0000:09:00.1: cvl_0_1 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:45.067 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:45.067 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.146 ms 00:18:45.067 00:18:45.067 --- 10.0.0.2 ping statistics --- 00:18:45.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.067 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:45.067 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:45.067 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:18:45.067 00:18:45.067 --- 10.0.0.1 ping statistics --- 00:18:45.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.067 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=4179020 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 4179020 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 4179020 ']' 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:45.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:45.067 01:12:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:45.067 [2024-07-16 01:12:00.767929] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:18:45.068 [2024-07-16 01:12:00.768040] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:45.068 EAL: No free 2048 kB hugepages reported on node 1 00:18:45.068 [2024-07-16 01:12:00.832385] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.068 [2024-07-16 01:12:00.940084] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:45.068 [2024-07-16 01:12:00.940138] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
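The nvmf_tcp_init sequence traced above reduces to a handful of iproute2 steps. A minimal standalone sketch, assuming the two E810 ports are named cvl_0_0 and cvl_0_1 as in this run:

    ip netns add cvl_0_0_ns_spdk                        # namespace that will hold the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the first port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address stays on the host
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                  # host -> namespace reachability
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # namespace -> host reachability

Splitting the two cabled ports across a network namespace this way lets a single machine act as both NVMe/TCP target (10.0.0.2, inside the namespace) and initiator (10.0.0.1, on the host), which is why both pings must succeed before nvmf_tgt is launched under ip netns exec.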
00:18:45.068 [2024-07-16 01:12:00.940152] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:45.068 [2024-07-16 01:12:00.940163] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:45.068 [2024-07-16 01:12:00.940172] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:45.068 [2024-07-16 01:12:00.940203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:46.017 01:12:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:46.017 01:12:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:18:46.017 01:12:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:46.017 01:12:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:46.017 01:12:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:46.017 01:12:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:46.017 01:12:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:18:46.017 01:12:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:46.017 01:12:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:46.017 01:12:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:46.017 01:12:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:46.017 01:12:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:46.017 01:12:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:46.017 01:12:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:46.017 [2024-07-16 01:12:01.956242] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:46.017 [2024-07-16 01:12:01.972230] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:46.017 [2024-07-16 01:12:01.972446] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:46.296 [2024-07-16 01:12:02.003339] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:46.296 malloc0 00:18:46.296 01:12:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:46.296 01:12:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=4179248 00:18:46.296 01:12:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:46.296 01:12:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 4179248 /var/tmp/bdevperf.sock 00:18:46.296 01:12:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 4179248 ']' 00:18:46.296 01:12:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:46.296 01:12:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- 
# local max_retries=100 00:18:46.296 01:12:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:46.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:46.297 01:12:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:46.297 01:12:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:46.297 [2024-07-16 01:12:02.099902] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:18:46.297 [2024-07-16 01:12:02.100013] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4179248 ] 00:18:46.297 EAL: No free 2048 kB hugepages reported on node 1 00:18:46.297 [2024-07-16 01:12:02.157912] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.297 [2024-07-16 01:12:02.269423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:47.229 01:12:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:47.229 01:12:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:18:47.229 01:12:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:47.486 [2024-07-16 01:12:03.265629] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:47.486 [2024-07-16 01:12:03.265752] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:47.486 TLSTESTn1 00:18:47.486 01:12:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:47.486 Running I/O for 10 seconds... 
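Condensed, the TLS flow exercised here is: write the NVMe TLS PSK to a 0600-mode file, register it on the target for the initiator's host NQN, then hand the same file to the initiator. A sketch using the key and identifiers from this run; the target-side subsystem RPCs are collapsed in the trace above, so the nvmf_subsystem_add_host line is illustrative, while the attach line is copied from the invocation above:

    key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
    echo -n "$key" > key.txt && chmod 0600 key.txt      # PSK file must not be world-readable

    # Target side (illustrative): requiring this PSK for host1 is what emits the
    # "nvmf_tcp_psk_path: deprecated feature PSK path" warning logged above
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk key.txt

    # Initiator side (bdevperf), as invoked above:
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key.txt

A successful attach here means the TLS handshake completed; the 10-second verify workload whose results follow runs entirely over that secured connection.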
00:18:59.684 00:18:59.684 Latency(us) 00:18:59.684 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:59.684 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:59.684 Verification LBA range: start 0x0 length 0x2000 00:18:59.684 TLSTESTn1 : 10.02 2493.62 9.74 0.00 0.00 51228.01 9903.22 46215.02 00:18:59.684 =================================================================================================================== 00:18:59.684 Total : 2493.62 9.74 0.00 0.00 51228.01 9903.22 46215.02 00:18:59.684 0 00:18:59.684 01:12:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:18:59.684 01:12:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:18:59.684 01:12:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:18:59.684 01:12:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:18:59.684 01:12:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:18:59.684 01:12:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:59.684 01:12:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:18:59.684 01:12:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:18:59.684 01:12:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:18:59.684 01:12:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:59.684 nvmf_trace.0 00:18:59.684 01:12:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:18:59.684 01:12:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 4179248 00:18:59.684 01:12:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 4179248 ']' 00:18:59.684 01:12:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 4179248 00:18:59.684 01:12:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:18:59.684 01:12:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:59.684 01:12:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4179248 00:18:59.684 01:12:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:59.684 01:12:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:59.684 01:12:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4179248' 00:18:59.684 killing process with pid 4179248 00:18:59.684 01:12:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 4179248 00:18:59.684 Received shutdown signal, test time was about 10.000000 seconds 00:18:59.684 00:18:59.684 Latency(us) 00:18:59.684 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:59.684 =================================================================================================================== 00:18:59.684 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:59.684 [2024-07-16 01:12:13.623671] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:59.684 01:12:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 4179248 00:18:59.684 01:12:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:18:59.684 01:12:13 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:18:59.684 01:12:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:18:59.684 01:12:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:59.684 01:12:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:18:59.684 01:12:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:59.684 01:12:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:59.684 rmmod nvme_tcp 00:18:59.684 rmmod nvme_fabrics 00:18:59.684 rmmod nvme_keyring 00:18:59.684 01:12:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:59.684 01:12:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:18:59.684 01:12:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:18:59.684 01:12:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 4179020 ']' 00:18:59.684 01:12:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 4179020 00:18:59.684 01:12:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 4179020 ']' 00:18:59.684 01:12:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 4179020 00:18:59.684 01:12:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:18:59.684 01:12:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:59.684 01:12:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4179020 00:18:59.684 01:12:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:59.684 01:12:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:59.684 01:12:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4179020' 00:18:59.684 killing process with pid 4179020 00:18:59.684 01:12:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 4179020 00:18:59.684 [2024-07-16 01:12:13.971964] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:59.684 01:12:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 4179020 00:18:59.684 01:12:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:59.684 01:12:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:59.684 01:12:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:59.684 01:12:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:59.684 01:12:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:59.684 01:12:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:59.684 01:12:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:59.684 01:12:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:00.620 01:12:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:00.620 01:12:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:00.620 00:19:00.620 real 0m18.034s 00:19:00.620 user 0m18.306s 00:19:00.620 sys 0m6.926s 00:19:00.620 01:12:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:00.620 01:12:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:00.620 ************************************ 00:19:00.620 END TEST nvmf_fips 
00:19:00.620 ************************************ 00:19:00.620 01:12:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:00.620 01:12:16 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:19:00.620 01:12:16 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:19:00.620 01:12:16 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:19:00.620 01:12:16 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:19:00.620 01:12:16 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:19:00.620 01:12:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:02.521 01:12:18 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:02.521 01:12:18 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:19:02.521 01:12:18 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:02.521 01:12:18 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:02.521 01:12:18 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:02.521 01:12:18 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:02.521 01:12:18 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:02.521 01:12:18 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:19:02.521 01:12:18 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:02.521 01:12:18 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:19:02.521 01:12:18 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:19:02.521 01:12:18 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:19:02.521 01:12:18 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:19:02.521 01:12:18 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:19:02.521 01:12:18 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:19:02.521 01:12:18 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:02.521 01:12:18 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:02.521 01:12:18 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:02.521 01:12:18 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:02.521 01:12:18 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:02.521 01:12:18 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:02.521 01:12:18 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:02.521 01:12:18 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:02.521 01:12:18 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:02.521 01:12:18 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:02.521 01:12:18 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:02.521 01:12:18 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:02.521 01:12:18 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:02.521 01:12:18 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:02.521 01:12:18 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:02.521 01:12:18 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:02.521 01:12:18 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:02.521 01:12:18 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:02.521 01:12:18 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:02.521 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:02.521 01:12:18 nvmf_tcp -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:02.521 01:12:18 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:02.521 01:12:18 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:02.521 01:12:18 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:02.521 01:12:18 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:02.521 01:12:18 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:02.521 01:12:18 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:02.521 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:02.521 01:12:18 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:02.521 01:12:18 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:02.521 01:12:18 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:02.521 01:12:18 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:02.522 01:12:18 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:02.522 01:12:18 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:02.522 01:12:18 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:02.522 01:12:18 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:02.522 01:12:18 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:02.522 01:12:18 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:02.522 01:12:18 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:02.522 01:12:18 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:02.522 01:12:18 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:02.522 01:12:18 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:02.522 01:12:18 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:02.522 01:12:18 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:02.522 Found net devices under 0000:09:00.0: cvl_0_0 00:19:02.522 01:12:18 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:02.522 01:12:18 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:02.522 01:12:18 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:02.522 01:12:18 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:02.522 01:12:18 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:02.522 01:12:18 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:02.522 01:12:18 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:02.522 01:12:18 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:02.522 01:12:18 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:02.522 Found net devices under 0000:09:00.1: cvl_0_1 00:19:02.522 01:12:18 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:02.522 01:12:18 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:02.522 01:12:18 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:02.522 01:12:18 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:19:02.522 01:12:18 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:02.522 01:12:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:02.522 01:12:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:19:02.522 01:12:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:02.522 ************************************ 00:19:02.522 START TEST nvmf_perf_adq 00:19:02.522 ************************************ 00:19:02.522 01:12:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:02.522 * Looking for test storage... 00:19:02.522 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:02.522 01:12:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:02.780 01:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:19:02.780 01:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:02.780 01:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:02.780 01:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:02.780 01:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:02.780 01:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:02.780 01:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:02.780 01:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:02.780 01:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:02.780 01:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:02.780 01:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:02.780 01:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:02.780 01:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:19:02.780 01:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:02.781 01:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:02.781 01:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:02.781 01:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:02.781 01:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:02.781 01:12:18 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:02.781 01:12:18 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:02.781 01:12:18 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:02.781 01:12:18 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.781 01:12:18 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.781 01:12:18 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.781 01:12:18 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:19:02.781 01:12:18 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.781 01:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:19:02.781 01:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:02.781 01:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:02.781 01:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:02.781 01:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:02.781 01:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:02.781 01:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:02.781 01:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:02.781 01:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:02.781 01:12:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:19:02.781 01:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:19:02.781 01:12:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:04.683 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:04.683 Found 0000:09:00.1 (0x8086 - 0x159b) 
00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:04.683 Found net devices under 0000:09:00.0: cvl_0_0 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:04.683 Found net devices under 0000:09:00.1: cvl_0_1 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:19:04.683 01:12:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:19:05.251 01:12:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:19:07.177 01:12:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:19:12.469 01:12:28 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:12.469 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:12.469 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:12.469 Found net devices under 0000:09:00.0: cvl_0_0 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:12.469 Found net devices under 0000:09:00.1: cvl_0_1 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:12.469 01:12:28 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:12.469 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:12.469 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:19:12.469 00:19:12.469 --- 10.0.0.2 ping statistics --- 00:19:12.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:12.469 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:19:12.469 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:12.469 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:12.469 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:19:12.469 00:19:12.469 --- 10.0.0.1 ping statistics --- 00:19:12.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:12.469 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:19:12.470 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:12.470 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:19:12.470 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:12.470 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:12.470 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:12.470 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:12.470 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:12.470 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:12.470 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:12.470 01:12:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:12.470 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:12.470 01:12:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:12.470 01:12:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:12.470 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=4185624 00:19:12.470 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:12.470 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 4185624 00:19:12.470 01:12:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 4185624 ']' 00:19:12.470 01:12:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:12.470 01:12:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:12.470 01:12:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:12.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:12.470 01:12:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:12.470 01:12:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:12.470 [2024-07-16 01:12:28.267684] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 
00:19:12.470 [2024-07-16 01:12:28.267757] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:12.470 EAL: No free 2048 kB hugepages reported on node 1 00:19:12.470 [2024-07-16 01:12:28.332454] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:12.470 [2024-07-16 01:12:28.441611] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:12.470 [2024-07-16 01:12:28.441666] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:12.470 [2024-07-16 01:12:28.441687] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:12.470 [2024-07-16 01:12:28.441698] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:12.470 [2024-07-16 01:12:28.441708] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:12.470 [2024-07-16 01:12:28.441808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:12.470 [2024-07-16 01:12:28.441867] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:12.470 [2024-07-16 01:12:28.441944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:12.470 [2024-07-16 01:12:28.441946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:12.729 01:12:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:12.729 01:12:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:19:12.729 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:12.729 01:12:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:12.729 01:12:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:12.729 01:12:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:12.729 01:12:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:19:12.729 01:12:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:12.729 01:12:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:12.729 01:12:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.729 01:12:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:12.729 01:12:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.729 01:12:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:12.729 01:12:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:19:12.729 01:12:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.729 01:12:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:12.729 01:12:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.729 01:12:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:12.729 01:12:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.729 01:12:28 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:19:12.729 01:12:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.729 01:12:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:19:12.729 01:12:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.729 01:12:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:12.729 [2024-07-16 01:12:28.646885] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:12.729 01:12:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.729 01:12:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:12.729 01:12:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.729 01:12:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:12.729 Malloc1 00:19:12.729 01:12:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.729 01:12:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:12.729 01:12:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.729 01:12:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:12.729 01:12:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.729 01:12:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:12.729 01:12:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.729 01:12:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:12.729 01:12:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.729 01:12:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:12.729 01:12:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.729 01:12:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:12.729 [2024-07-16 01:12:28.700086] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:12.729 01:12:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.729 01:12:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=4185773 00:19:12.729 01:12:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:19:12.729 01:12:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:12.987 EAL: No free 2048 kB hugepages reported on node 1 00:19:14.887 01:12:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:19:14.887 01:12:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.887 01:12:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:14.887 01:12:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.887 01:12:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:19:14.887 
"tick_rate": 2700000000, 00:19:14.887 "poll_groups": [ 00:19:14.887 { 00:19:14.887 "name": "nvmf_tgt_poll_group_000", 00:19:14.887 "admin_qpairs": 1, 00:19:14.887 "io_qpairs": 1, 00:19:14.887 "current_admin_qpairs": 1, 00:19:14.887 "current_io_qpairs": 1, 00:19:14.887 "pending_bdev_io": 0, 00:19:14.887 "completed_nvme_io": 20654, 00:19:14.887 "transports": [ 00:19:14.887 { 00:19:14.887 "trtype": "TCP" 00:19:14.887 } 00:19:14.887 ] 00:19:14.887 }, 00:19:14.887 { 00:19:14.887 "name": "nvmf_tgt_poll_group_001", 00:19:14.887 "admin_qpairs": 0, 00:19:14.887 "io_qpairs": 1, 00:19:14.887 "current_admin_qpairs": 0, 00:19:14.887 "current_io_qpairs": 1, 00:19:14.887 "pending_bdev_io": 0, 00:19:14.887 "completed_nvme_io": 19534, 00:19:14.887 "transports": [ 00:19:14.887 { 00:19:14.887 "trtype": "TCP" 00:19:14.887 } 00:19:14.887 ] 00:19:14.887 }, 00:19:14.887 { 00:19:14.887 "name": "nvmf_tgt_poll_group_002", 00:19:14.887 "admin_qpairs": 0, 00:19:14.887 "io_qpairs": 1, 00:19:14.887 "current_admin_qpairs": 0, 00:19:14.887 "current_io_qpairs": 1, 00:19:14.887 "pending_bdev_io": 0, 00:19:14.887 "completed_nvme_io": 18518, 00:19:14.887 "transports": [ 00:19:14.887 { 00:19:14.887 "trtype": "TCP" 00:19:14.887 } 00:19:14.887 ] 00:19:14.887 }, 00:19:14.887 { 00:19:14.887 "name": "nvmf_tgt_poll_group_003", 00:19:14.887 "admin_qpairs": 0, 00:19:14.887 "io_qpairs": 1, 00:19:14.887 "current_admin_qpairs": 0, 00:19:14.887 "current_io_qpairs": 1, 00:19:14.887 "pending_bdev_io": 0, 00:19:14.887 "completed_nvme_io": 20873, 00:19:14.887 "transports": [ 00:19:14.887 { 00:19:14.887 "trtype": "TCP" 00:19:14.887 } 00:19:14.887 ] 00:19:14.887 } 00:19:14.887 ] 00:19:14.887 }' 00:19:14.887 01:12:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:19:14.887 01:12:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:19:14.887 01:12:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:19:14.887 01:12:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:19:14.887 01:12:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 4185773 00:19:23.060 Initializing NVMe Controllers 00:19:23.060 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:23.060 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:23.060 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:23.060 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:23.060 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:19:23.060 Initialization complete. Launching workers. 
00:19:23.060 ======================================================== 00:19:23.060 Latency(us) 00:19:23.060 Device Information : IOPS MiB/s Average min max 00:19:23.060 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 9693.50 37.87 6601.85 2829.07 9666.26 00:19:23.060 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10348.20 40.42 6184.53 2244.15 10671.88 00:19:23.060 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10940.70 42.74 5849.24 2514.83 8605.64 00:19:23.060 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10858.40 42.42 5894.01 2851.47 8549.02 00:19:23.060 ======================================================== 00:19:23.060 Total : 41840.80 163.44 6118.15 2244.15 10671.88 00:19:23.060 00:19:23.060 01:12:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:19:23.060 01:12:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:23.060 01:12:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:19:23.060 01:12:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:23.060 01:12:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:19:23.060 01:12:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:23.060 01:12:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:23.060 rmmod nvme_tcp 00:19:23.060 rmmod nvme_fabrics 00:19:23.060 rmmod nvme_keyring 00:19:23.060 01:12:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:23.060 01:12:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:19:23.060 01:12:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:19:23.060 01:12:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 4185624 ']' 00:19:23.060 01:12:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 4185624 00:19:23.060 01:12:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 4185624 ']' 00:19:23.060 01:12:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 4185624 00:19:23.060 01:12:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:19:23.060 01:12:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:23.060 01:12:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4185624 00:19:23.060 01:12:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:23.060 01:12:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:23.060 01:12:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4185624' 00:19:23.060 killing process with pid 4185624 00:19:23.060 01:12:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 4185624 00:19:23.060 01:12:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 4185624 00:19:23.319 01:12:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:23.319 01:12:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:23.319 01:12:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:23.319 01:12:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:23.319 01:12:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:23.319 01:12:39 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:23.319 01:12:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:23.319 01:12:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:25.850 01:12:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:25.850 01:12:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:19:25.850 01:12:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:19:25.850 01:12:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:19:27.784 01:12:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:19:33.060 01:12:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:19:33.060 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:33.060 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:33.060 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:33.060 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:33.060 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:33.060 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:33.060 01:12:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:33.060 01:12:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:33.060 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:33.060 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:33.060 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:19:33.060 01:12:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:33.060 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:33.060 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:19:33.060 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:33.060 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:33.060 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:33.060 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:33.060 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:33.060 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:33.060 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:33.060 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:33.060 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:33.060 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:19:33.060 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:33.060 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:33.060 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:33.060 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:33.060 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:33.060 01:12:48 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:33.060 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:33.060 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:33.060 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:33.060 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:33.060 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:33.060 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:33.060 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:33.060 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:33.060 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:33.060 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:33.060 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:33.060 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:33.060 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:33.060 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:33.060 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:33.060 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:33.060 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:33.060 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:33.060 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:33.060 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:33.060 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:33.060 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:33.060 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:33.060 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:33.060 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:33.060 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:33.060 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:33.060 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:33.061 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:33.061 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:33.061 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:33.061 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:33.061 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:33.061 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:33.061 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
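The sysfs walk above is how each matched E810 function (vendor:device 0x8086:0x159b, per the "Found 0000:09:00.0 / 0000:09:00.1" lines) is resolved to its kernel netdev: every PCI function lists its interfaces under /sys/bus/pci/devices/<addr>/net/, and only interfaces reported up survive the filter. A standalone sketch of that lookup, reusing the first address reported in this run; the operstate read is an assumption about how the 'up' check is sourced:

# Map a PCI function to its net device(s) the same way the trace does.
pci=0000:09:00.0
for dev in /sys/bus/pci/devices/"$pci"/net/*; do
    name=${dev##*/}                               # e.g. cvl_0_0 on this system
    state=$(cat "$dev/operstate" 2>/dev/null)     # the trace keeps only devices that are up
    echo "Found net device under $pci: $name ($state)"
done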
00:19:33.061 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:33.061 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:33.061 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:33.061 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:33.061 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:33.061 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:33.061 Found net devices under 0000:09:00.0: cvl_0_0 00:19:33.061 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:33.061 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:33.061 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:33.061 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:33.061 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:33.061 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:33.061 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:33.061 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:33.061 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:33.061 Found net devices under 0000:09:00.1: cvl_0_1 00:19:33.061 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:33.061 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:33.061 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:19:33.061 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:33.061 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:33.061 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:33.061 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:33.061 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:33.061 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:33.061 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:33.061 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:33.061 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:33.061 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:33.061 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:33.061 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:33.061 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:33.061 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:33.061 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:33.061 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:33.061 
01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:33.061 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:33.061 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:33.061 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:33.061 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:33.061 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:33.061 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:33.061 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:33.061 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:19:33.061 00:19:33.061 --- 10.0.0.2 ping statistics --- 00:19:33.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:33.061 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:19:33.061 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:33.061 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:33.061 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:19:33.061 00:19:33.061 --- 10.0.0.1 ping statistics --- 00:19:33.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:33.061 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:19:33.061 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:33.061 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:19:33.061 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:33.061 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:33.061 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:33.061 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:33.061 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:33.061 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:33.061 01:12:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:33.061 01:12:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:19:33.061 01:12:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:19:33.061 01:12:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:19:33.061 01:12:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:19:33.061 net.core.busy_poll = 1 00:19:33.061 01:12:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:19:33.061 net.core.busy_read = 1 00:19:33.061 01:12:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:19:33.061 01:12:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:19:33.061 01:12:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:19:33.061 01:12:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:19:33.061 01:12:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:19:33.061 01:12:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:33.061 01:12:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:33.061 01:12:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:33.061 01:12:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:33.061 01:12:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=4188391 00:19:33.061 01:12:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:33.061 01:12:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 4188391 00:19:33.061 01:12:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 4188391 ']' 00:19:33.061 01:12:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:33.061 01:12:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:33.061 01:12:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:33.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:33.061 01:12:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:33.061 01:12:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:33.319 [2024-07-16 01:12:49.086638] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:19:33.319 [2024-07-16 01:12:49.086737] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:33.319 EAL: No free 2048 kB hugepages reported on node 1 00:19:33.319 [2024-07-16 01:12:49.154146] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:33.319 [2024-07-16 01:12:49.260712] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:33.319 [2024-07-16 01:12:49.260767] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:33.319 [2024-07-16 01:12:49.260796] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:33.320 [2024-07-16 01:12:49.260807] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:33.320 [2024-07-16 01:12:49.260817] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
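Before this second target came up, adq_configure_driver (perf_adq.sh@22-38, traced just above) switched the NIC into ADQ mode: busy_poll/busy_read make receive paths spin instead of sleep, mqprio carves the device into two hardware traffic classes (queues 0-1 for TC0, queues 2-3 for TC1), and the flower filter pins NVMe/TCP traffic for 10.0.0.2:4420 to TC1 entirely in hardware (skip_sw). Condensed from the trace, with every device name and value taken from this run; the test additionally wraps the device commands in ip netns exec cvl_0_0_ns_spdk and finishes with the set_xps_rxqs script:

# Busy polling on the socket layer, then hardware queue steering for port 4420.
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1
ethtool --offload cvl_0_0 hw-tc-offload on
tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
tc qdisc add dev cvl_0_0 ingress
tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1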
00:19:33.320 [2024-07-16 01:12:49.260869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:33.320 [2024-07-16 01:12:49.260929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:33.320 [2024-07-16 01:12:49.260951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:33.320 [2024-07-16 01:12:49.260960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:33.320 01:12:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:33.320 01:12:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:19:33.320 01:12:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:33.320 01:12:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:33.320 01:12:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:33.578 01:12:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:33.578 01:12:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:19:33.578 01:12:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:33.578 01:12:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:33.578 01:12:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.578 01:12:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:33.578 01:12:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.578 01:12:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:33.578 01:12:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:19:33.578 01:12:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.578 01:12:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:33.578 01:12:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.578 01:12:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:33.578 01:12:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.578 01:12:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:33.578 01:12:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.578 01:12:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:19:33.578 01:12:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.578 01:12:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:33.578 [2024-07-16 01:12:49.485850] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:33.578 01:12:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.578 01:12:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:33.578 01:12:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.578 01:12:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:33.578 Malloc1 00:19:33.578 01:12:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.578 01:12:49 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:33.578 01:12:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.578 01:12:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:33.578 01:12:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.578 01:12:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:33.578 01:12:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.578 01:12:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:33.578 01:12:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.578 01:12:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:33.578 01:12:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.578 01:12:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:33.578 [2024-07-16 01:12:49.539029] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:33.578 01:12:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.578 01:12:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=4188424 00:19:33.578 01:12:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:19:33.578 01:12:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:33.836 EAL: No free 2048 kB hugepages reported on node 1 00:19:35.737 01:12:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:19:35.737 01:12:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.737 01:12:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:35.737 01:12:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.737 01:12:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:19:35.737 "tick_rate": 2700000000, 00:19:35.737 "poll_groups": [ 00:19:35.737 { 00:19:35.737 "name": "nvmf_tgt_poll_group_000", 00:19:35.737 "admin_qpairs": 1, 00:19:35.737 "io_qpairs": 0, 00:19:35.737 "current_admin_qpairs": 1, 00:19:35.737 "current_io_qpairs": 0, 00:19:35.737 "pending_bdev_io": 0, 00:19:35.737 "completed_nvme_io": 0, 00:19:35.737 "transports": [ 00:19:35.737 { 00:19:35.737 "trtype": "TCP" 00:19:35.737 } 00:19:35.737 ] 00:19:35.737 }, 00:19:35.737 { 00:19:35.737 "name": "nvmf_tgt_poll_group_001", 00:19:35.737 "admin_qpairs": 0, 00:19:35.737 "io_qpairs": 4, 00:19:35.737 "current_admin_qpairs": 0, 00:19:35.737 "current_io_qpairs": 4, 00:19:35.737 "pending_bdev_io": 0, 00:19:35.737 "completed_nvme_io": 33306, 00:19:35.737 "transports": [ 00:19:35.737 { 00:19:35.737 "trtype": "TCP" 00:19:35.737 } 00:19:35.737 ] 00:19:35.737 }, 00:19:35.737 { 00:19:35.737 "name": "nvmf_tgt_poll_group_002", 00:19:35.737 "admin_qpairs": 0, 00:19:35.737 "io_qpairs": 0, 00:19:35.737 "current_admin_qpairs": 0, 00:19:35.737 "current_io_qpairs": 0, 00:19:35.737 "pending_bdev_io": 0, 00:19:35.737 "completed_nvme_io": 0, 00:19:35.737 
"transports": [ 00:19:35.737 { 00:19:35.737 "trtype": "TCP" 00:19:35.737 } 00:19:35.737 ] 00:19:35.737 }, 00:19:35.737 { 00:19:35.737 "name": "nvmf_tgt_poll_group_003", 00:19:35.737 "admin_qpairs": 0, 00:19:35.737 "io_qpairs": 0, 00:19:35.737 "current_admin_qpairs": 0, 00:19:35.737 "current_io_qpairs": 0, 00:19:35.737 "pending_bdev_io": 0, 00:19:35.737 "completed_nvme_io": 0, 00:19:35.737 "transports": [ 00:19:35.737 { 00:19:35.737 "trtype": "TCP" 00:19:35.737 } 00:19:35.737 ] 00:19:35.737 } 00:19:35.737 ] 00:19:35.737 }' 00:19:35.737 01:12:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:19:35.737 01:12:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:19:35.737 01:12:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=3 00:19:35.737 01:12:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 3 -lt 2 ]] 00:19:35.737 01:12:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 4188424 00:19:43.840 Initializing NVMe Controllers 00:19:43.840 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:43.840 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:43.840 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:43.840 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:43.840 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:19:43.840 Initialization complete. Launching workers. 00:19:43.840 ======================================================== 00:19:43.840 Latency(us) 00:19:43.840 Device Information : IOPS MiB/s Average min max 00:19:43.840 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4338.86 16.95 14751.51 1574.96 62208.20 00:19:43.840 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4677.05 18.27 13730.02 1914.31 61283.67 00:19:43.840 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4537.55 17.72 14110.99 1878.31 60400.44 00:19:43.840 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4027.66 15.73 15924.35 1696.47 62569.69 00:19:43.840 ======================================================== 00:19:43.840 Total : 17581.12 68.68 14583.14 1574.96 62569.69 00:19:43.840 00:19:43.840 01:12:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:19:43.840 01:12:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:43.840 01:12:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:19:43.840 01:12:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:43.840 01:12:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:19:43.840 01:12:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:43.840 01:12:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:43.840 rmmod nvme_tcp 00:19:43.840 rmmod nvme_fabrics 00:19:43.840 rmmod nvme_keyring 00:19:43.840 01:12:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:43.840 01:12:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:19:43.840 01:12:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:19:43.840 01:12:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 4188391 ']' 00:19:43.840 01:12:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 
4188391 00:19:43.840 01:12:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 4188391 ']' 00:19:43.840 01:12:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 4188391 00:19:43.840 01:12:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:19:43.840 01:12:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:43.840 01:12:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4188391 00:19:43.840 01:12:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:43.840 01:12:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:43.840 01:12:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4188391' 00:19:43.840 killing process with pid 4188391 00:19:43.840 01:12:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 4188391 00:19:43.840 01:12:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 4188391 00:19:44.405 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:44.405 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:44.405 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:44.405 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:44.406 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:44.406 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:44.406 01:13:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:44.406 01:13:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:47.692 01:13:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:47.692 01:13:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:19:47.692 00:19:47.692 real 0m44.722s 00:19:47.692 user 2m36.166s 00:19:47.692 sys 0m11.381s 00:19:47.692 01:13:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:47.692 01:13:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:47.692 ************************************ 00:19:47.692 END TEST nvmf_perf_adq 00:19:47.692 ************************************ 00:19:47.692 01:13:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:47.692 01:13:03 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:19:47.692 01:13:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:47.692 01:13:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:47.692 01:13:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:47.692 ************************************ 00:19:47.692 START TEST nvmf_shutdown 00:19:47.692 ************************************ 00:19:47.692 01:13:03 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:19:47.692 * Looking for test storage... 
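The two jq checks in this file verify opposite things. In the first (non-ADQ) pass, each of the four poll groups carried one I/O qpair, so perf_adq.sh@78 counted groups with current_io_qpairs == 1 and required exactly 4. In the ADQ pass above, placement-id steering collapsed all four qpairs onto nvmf_tgt_poll_group_001 (io_qpairs: 4), so perf_adq.sh@100 instead counts idle groups and fails only when fewer than 2 are idle. The ADQ-side assertion, reconstructed from the trace; the failure branch shown is illustrative, since the script's actual handler is not traced here:

# Count poll groups with no active I/O qpairs; steering should idle most of them.
count=$(jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
            <<< "$nvmf_stats" | wc -l)
if [[ $count -lt 2 ]]; then
    echo "ADQ steering ineffective: only $count idle poll groups" >&2
    exit 1
fi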
00:19:47.692 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:47.692 01:13:03 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:47.692 01:13:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:19:47.692 01:13:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:47.692 01:13:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:47.692 01:13:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:47.692 01:13:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:47.692 01:13:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:47.692 01:13:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:47.692 01:13:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:47.692 01:13:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:47.692 01:13:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:47.692 01:13:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:47.692 01:13:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:47.692 01:13:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:19:47.692 01:13:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:47.692 01:13:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:47.692 01:13:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:47.692 01:13:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:47.692 01:13:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:47.692 01:13:03 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:47.692 01:13:03 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:47.692 01:13:03 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:47.692 01:13:03 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.692 01:13:03 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.692 01:13:03 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.692 01:13:03 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:19:47.693 01:13:03 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.693 01:13:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:19:47.693 01:13:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:47.693 01:13:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:47.693 01:13:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:47.693 01:13:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:47.693 01:13:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:47.693 01:13:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:47.693 01:13:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:47.693 01:13:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:47.693 01:13:03 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:47.693 01:13:03 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:47.693 01:13:03 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:19:47.693 01:13:03 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:47.693 01:13:03 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:47.693 01:13:03 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:47.693 ************************************ 00:19:47.693 START TEST nvmf_shutdown_tc1 00:19:47.693 ************************************ 00:19:47.693 01:13:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:19:47.693 01:13:03 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:19:47.693 01:13:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:47.693 01:13:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:47.693 01:13:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:47.693 01:13:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:47.693 01:13:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:47.693 01:13:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:47.693 01:13:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:47.693 01:13:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:47.693 01:13:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:47.693 01:13:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:47.693 01:13:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:47.693 01:13:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:47.693 01:13:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:49.594 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:49.594 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:49.594 01:13:05 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:49.594 Found net devices under 0000:09:00.0: cvl_0_0 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:49.594 Found net devices under 0000:09:00.1: cvl_0_1 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:49.594 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:49.594 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:19:49.594 00:19:49.594 --- 10.0.0.2 ping statistics --- 00:19:49.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:49.594 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:49.594 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:49.594 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:19:49.594 00:19:49.594 --- 10.0.0.1 ping statistics --- 00:19:49.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:49.594 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:49.594 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:49.595 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:49.595 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:19:49.595 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:49.595 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:49.595 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:49.595 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=4191712 00:19:49.595 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 4191712 00:19:49.595 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 4191712 ']' 00:19:49.595 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:49.595 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:49.595 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:49.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:49.595 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:49.595 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:49.595 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:49.852 [2024-07-16 01:13:05.607894] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 
00:19:49.852 [2024-07-16 01:13:05.608028] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:49.852 EAL: No free 2048 kB hugepages reported on node 1 00:19:49.852 [2024-07-16 01:13:05.673634] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:49.852 [2024-07-16 01:13:05.785335] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:49.852 [2024-07-16 01:13:05.785403] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:49.852 [2024-07-16 01:13:05.785431] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:49.852 [2024-07-16 01:13:05.785442] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:49.852 [2024-07-16 01:13:05.785452] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:49.852 [2024-07-16 01:13:05.785533] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:49.852 [2024-07-16 01:13:05.785600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:49.852 [2024-07-16 01:13:05.785666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:49.852 [2024-07-16 01:13:05.785668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:50.110 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:50.110 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:19:50.110 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:50.110 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:50.110 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:50.110 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:50.110 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:50.110 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.110 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:50.110 [2024-07-16 01:13:05.947646] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:50.110 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.110 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:50.110 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:50.110 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:50.110 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:50.110 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:50.110 01:13:05 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:50.110 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:50.110 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:50.110 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:50.110 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:50.110 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:50.110 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:50.110 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:50.110 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:50.110 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:50.110 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:50.110 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:50.110 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:50.110 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:50.110 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:50.110 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:50.110 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:50.110 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:50.110 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:50.110 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:50.110 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:50.110 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.110 01:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:50.110 Malloc1 00:19:50.110 [2024-07-16 01:13:06.026969] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:50.110 Malloc2 00:19:50.110 Malloc3 00:19:50.368 Malloc4 00:19:50.368 Malloc5 00:19:50.368 Malloc6 00:19:50.368 Malloc7 00:19:50.368 Malloc8 00:19:50.626 Malloc9 00:19:50.627 Malloc10 00:19:50.627 01:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.627 01:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:50.627 01:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:50.627 01:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:50.627 01:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=4191892 00:19:50.627 01:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 4191892 
/var/tmp/bdevperf.sock 00:19:50.627 01:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 4191892 ']' 00:19:50.627 01:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:50.627 01:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:19:50.627 01:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:50.627 01:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:50.627 01:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:50.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:50.627 01:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:19:50.627 01:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:50.627 01:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:19:50.627 01:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:50.627 01:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:50.627 01:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:50.627 { 00:19:50.627 "params": { 00:19:50.627 "name": "Nvme$subsystem", 00:19:50.627 "trtype": "$TEST_TRANSPORT", 00:19:50.627 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:50.627 "adrfam": "ipv4", 00:19:50.627 "trsvcid": "$NVMF_PORT", 00:19:50.627 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:50.627 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:50.627 "hdgst": ${hdgst:-false}, 00:19:50.627 "ddgst": ${ddgst:-false} 00:19:50.627 }, 00:19:50.627 "method": "bdev_nvme_attach_controller" 00:19:50.627 } 00:19:50.627 EOF 00:19:50.627 )") 00:19:50.627 01:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:50.627 01:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:50.627 01:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:50.627 { 00:19:50.627 "params": { 00:19:50.627 "name": "Nvme$subsystem", 00:19:50.627 "trtype": "$TEST_TRANSPORT", 00:19:50.627 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:50.627 "adrfam": "ipv4", 00:19:50.627 "trsvcid": "$NVMF_PORT", 00:19:50.627 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:50.627 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:50.627 "hdgst": ${hdgst:-false}, 00:19:50.627 "ddgst": ${ddgst:-false} 00:19:50.627 }, 00:19:50.627 "method": "bdev_nvme_attach_controller" 00:19:50.627 } 00:19:50.627 EOF 00:19:50.627 )") 00:19:50.627 01:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:50.627 01:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:50.627 01:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:50.627 { 00:19:50.627 "params": { 00:19:50.627 
"name": "Nvme$subsystem", 00:19:50.627 "trtype": "$TEST_TRANSPORT", 00:19:50.627 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:50.627 "adrfam": "ipv4", 00:19:50.627 "trsvcid": "$NVMF_PORT", 00:19:50.627 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:50.627 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:50.627 "hdgst": ${hdgst:-false}, 00:19:50.627 "ddgst": ${ddgst:-false} 00:19:50.627 }, 00:19:50.627 "method": "bdev_nvme_attach_controller" 00:19:50.627 } 00:19:50.627 EOF 00:19:50.627 )") 00:19:50.627 01:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:50.627 01:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:50.627 01:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:50.627 { 00:19:50.627 "params": { 00:19:50.627 "name": "Nvme$subsystem", 00:19:50.627 "trtype": "$TEST_TRANSPORT", 00:19:50.627 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:50.627 "adrfam": "ipv4", 00:19:50.627 "trsvcid": "$NVMF_PORT", 00:19:50.627 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:50.627 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:50.627 "hdgst": ${hdgst:-false}, 00:19:50.627 "ddgst": ${ddgst:-false} 00:19:50.627 }, 00:19:50.627 "method": "bdev_nvme_attach_controller" 00:19:50.627 } 00:19:50.627 EOF 00:19:50.627 )") 00:19:50.627 01:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:50.627 01:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:50.627 01:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:50.627 { 00:19:50.627 "params": { 00:19:50.627 "name": "Nvme$subsystem", 00:19:50.627 "trtype": "$TEST_TRANSPORT", 00:19:50.627 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:50.627 "adrfam": "ipv4", 00:19:50.627 "trsvcid": "$NVMF_PORT", 00:19:50.627 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:50.627 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:50.627 "hdgst": ${hdgst:-false}, 00:19:50.627 "ddgst": ${ddgst:-false} 00:19:50.627 }, 00:19:50.627 "method": "bdev_nvme_attach_controller" 00:19:50.627 } 00:19:50.627 EOF 00:19:50.627 )") 00:19:50.627 01:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:50.627 01:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:50.627 01:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:50.627 { 00:19:50.627 "params": { 00:19:50.627 "name": "Nvme$subsystem", 00:19:50.627 "trtype": "$TEST_TRANSPORT", 00:19:50.627 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:50.627 "adrfam": "ipv4", 00:19:50.627 "trsvcid": "$NVMF_PORT", 00:19:50.627 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:50.627 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:50.627 "hdgst": ${hdgst:-false}, 00:19:50.627 "ddgst": ${ddgst:-false} 00:19:50.627 }, 00:19:50.627 "method": "bdev_nvme_attach_controller" 00:19:50.627 } 00:19:50.627 EOF 00:19:50.627 )") 00:19:50.627 01:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:50.627 01:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:50.627 01:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:50.627 { 00:19:50.627 "params": { 00:19:50.627 "name": "Nvme$subsystem", 
00:19:50.627 "trtype": "$TEST_TRANSPORT", 00:19:50.627 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:50.627 "adrfam": "ipv4", 00:19:50.627 "trsvcid": "$NVMF_PORT", 00:19:50.627 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:50.627 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:50.627 "hdgst": ${hdgst:-false}, 00:19:50.627 "ddgst": ${ddgst:-false} 00:19:50.627 }, 00:19:50.627 "method": "bdev_nvme_attach_controller" 00:19:50.627 } 00:19:50.627 EOF 00:19:50.627 )") 00:19:50.627 01:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:50.627 01:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:50.627 01:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:50.627 { 00:19:50.627 "params": { 00:19:50.627 "name": "Nvme$subsystem", 00:19:50.627 "trtype": "$TEST_TRANSPORT", 00:19:50.627 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:50.627 "adrfam": "ipv4", 00:19:50.627 "trsvcid": "$NVMF_PORT", 00:19:50.627 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:50.627 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:50.627 "hdgst": ${hdgst:-false}, 00:19:50.627 "ddgst": ${ddgst:-false} 00:19:50.627 }, 00:19:50.628 "method": "bdev_nvme_attach_controller" 00:19:50.628 } 00:19:50.628 EOF 00:19:50.628 )") 00:19:50.628 01:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:50.628 01:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:50.628 01:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:50.628 { 00:19:50.628 "params": { 00:19:50.628 "name": "Nvme$subsystem", 00:19:50.628 "trtype": "$TEST_TRANSPORT", 00:19:50.628 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:50.628 "adrfam": "ipv4", 00:19:50.628 "trsvcid": "$NVMF_PORT", 00:19:50.628 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:50.628 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:50.628 "hdgst": ${hdgst:-false}, 00:19:50.628 "ddgst": ${ddgst:-false} 00:19:50.628 }, 00:19:50.628 "method": "bdev_nvme_attach_controller" 00:19:50.628 } 00:19:50.628 EOF 00:19:50.628 )") 00:19:50.628 01:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:50.628 01:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:50.628 01:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:50.628 { 00:19:50.628 "params": { 00:19:50.628 "name": "Nvme$subsystem", 00:19:50.628 "trtype": "$TEST_TRANSPORT", 00:19:50.628 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:50.628 "adrfam": "ipv4", 00:19:50.628 "trsvcid": "$NVMF_PORT", 00:19:50.628 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:50.628 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:50.628 "hdgst": ${hdgst:-false}, 00:19:50.628 "ddgst": ${ddgst:-false} 00:19:50.628 }, 00:19:50.628 "method": "bdev_nvme_attach_controller" 00:19:50.628 } 00:19:50.628 EOF 00:19:50.628 )") 00:19:50.628 01:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:50.628 01:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:19:50.628 01:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:19:50.628 01:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:50.628 "params": { 00:19:50.628 "name": "Nvme1", 00:19:50.628 "trtype": "tcp", 00:19:50.628 "traddr": "10.0.0.2", 00:19:50.628 "adrfam": "ipv4", 00:19:50.628 "trsvcid": "4420", 00:19:50.628 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:50.628 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:50.628 "hdgst": false, 00:19:50.628 "ddgst": false 00:19:50.628 }, 00:19:50.628 "method": "bdev_nvme_attach_controller" 00:19:50.628 },{ 00:19:50.628 "params": { 00:19:50.628 "name": "Nvme2", 00:19:50.628 "trtype": "tcp", 00:19:50.628 "traddr": "10.0.0.2", 00:19:50.628 "adrfam": "ipv4", 00:19:50.628 "trsvcid": "4420", 00:19:50.628 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:50.628 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:50.628 "hdgst": false, 00:19:50.628 "ddgst": false 00:19:50.628 }, 00:19:50.628 "method": "bdev_nvme_attach_controller" 00:19:50.628 },{ 00:19:50.628 "params": { 00:19:50.628 "name": "Nvme3", 00:19:50.628 "trtype": "tcp", 00:19:50.628 "traddr": "10.0.0.2", 00:19:50.628 "adrfam": "ipv4", 00:19:50.628 "trsvcid": "4420", 00:19:50.628 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:50.628 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:50.628 "hdgst": false, 00:19:50.628 "ddgst": false 00:19:50.628 }, 00:19:50.628 "method": "bdev_nvme_attach_controller" 00:19:50.628 },{ 00:19:50.628 "params": { 00:19:50.628 "name": "Nvme4", 00:19:50.628 "trtype": "tcp", 00:19:50.628 "traddr": "10.0.0.2", 00:19:50.628 "adrfam": "ipv4", 00:19:50.628 "trsvcid": "4420", 00:19:50.628 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:50.628 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:50.628 "hdgst": false, 00:19:50.628 "ddgst": false 00:19:50.628 }, 00:19:50.628 "method": "bdev_nvme_attach_controller" 00:19:50.628 },{ 00:19:50.628 "params": { 00:19:50.628 "name": "Nvme5", 00:19:50.628 "trtype": "tcp", 00:19:50.628 "traddr": "10.0.0.2", 00:19:50.628 "adrfam": "ipv4", 00:19:50.628 "trsvcid": "4420", 00:19:50.628 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:50.628 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:50.628 "hdgst": false, 00:19:50.628 "ddgst": false 00:19:50.628 }, 00:19:50.628 "method": "bdev_nvme_attach_controller" 00:19:50.628 },{ 00:19:50.628 "params": { 00:19:50.628 "name": "Nvme6", 00:19:50.628 "trtype": "tcp", 00:19:50.628 "traddr": "10.0.0.2", 00:19:50.628 "adrfam": "ipv4", 00:19:50.628 "trsvcid": "4420", 00:19:50.628 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:50.628 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:50.628 "hdgst": false, 00:19:50.628 "ddgst": false 00:19:50.628 }, 00:19:50.628 "method": "bdev_nvme_attach_controller" 00:19:50.628 },{ 00:19:50.628 "params": { 00:19:50.628 "name": "Nvme7", 00:19:50.628 "trtype": "tcp", 00:19:50.628 "traddr": "10.0.0.2", 00:19:50.628 "adrfam": "ipv4", 00:19:50.628 "trsvcid": "4420", 00:19:50.628 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:50.628 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:50.628 "hdgst": false, 00:19:50.628 "ddgst": false 00:19:50.628 }, 00:19:50.628 "method": "bdev_nvme_attach_controller" 00:19:50.628 },{ 00:19:50.628 "params": { 00:19:50.628 "name": "Nvme8", 00:19:50.628 "trtype": "tcp", 00:19:50.628 "traddr": "10.0.0.2", 00:19:50.628 "adrfam": "ipv4", 00:19:50.628 "trsvcid": "4420", 00:19:50.628 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:50.628 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:50.628 "hdgst": false, 
00:19:50.628 "ddgst": false 00:19:50.628 }, 00:19:50.628 "method": "bdev_nvme_attach_controller" 00:19:50.628 },{ 00:19:50.628 "params": { 00:19:50.628 "name": "Nvme9", 00:19:50.628 "trtype": "tcp", 00:19:50.628 "traddr": "10.0.0.2", 00:19:50.628 "adrfam": "ipv4", 00:19:50.628 "trsvcid": "4420", 00:19:50.628 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:50.628 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:50.628 "hdgst": false, 00:19:50.628 "ddgst": false 00:19:50.628 }, 00:19:50.628 "method": "bdev_nvme_attach_controller" 00:19:50.628 },{ 00:19:50.628 "params": { 00:19:50.628 "name": "Nvme10", 00:19:50.628 "trtype": "tcp", 00:19:50.628 "traddr": "10.0.0.2", 00:19:50.628 "adrfam": "ipv4", 00:19:50.628 "trsvcid": "4420", 00:19:50.628 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:50.628 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:50.628 "hdgst": false, 00:19:50.629 "ddgst": false 00:19:50.629 }, 00:19:50.629 "method": "bdev_nvme_attach_controller" 00:19:50.629 }' 00:19:50.629 [2024-07-16 01:13:06.544820] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:19:50.629 [2024-07-16 01:13:06.544890] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:19:50.629 EAL: No free 2048 kB hugepages reported on node 1 00:19:50.629 [2024-07-16 01:13:06.608008] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.887 [2024-07-16 01:13:06.718738] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:52.812 01:13:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:52.812 01:13:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:19:52.812 01:13:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:52.812 01:13:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.812 01:13:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:52.812 01:13:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.812 01:13:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 4191892 00:19:52.812 01:13:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:19:52.812 01:13:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:19:53.379 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 4191892 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:19:53.379 01:13:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 4191712 00:19:53.379 01:13:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:19:53.379 01:13:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:53.379 01:13:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:19:53.379 01:13:09 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:19:53.379 01:13:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:53.379 01:13:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:53.379 { 00:19:53.379 "params": { 00:19:53.379 "name": "Nvme$subsystem", 00:19:53.379 "trtype": "$TEST_TRANSPORT", 00:19:53.379 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:53.379 "adrfam": "ipv4", 00:19:53.379 "trsvcid": "$NVMF_PORT", 00:19:53.379 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:53.379 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:53.379 "hdgst": ${hdgst:-false}, 00:19:53.379 "ddgst": ${ddgst:-false} 00:19:53.379 }, 00:19:53.379 "method": "bdev_nvme_attach_controller" 00:19:53.379 } 00:19:53.379 EOF 00:19:53.379 )") 00:19:53.379 01:13:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:53.379 01:13:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:53.379 01:13:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:53.379 { 00:19:53.379 "params": { 00:19:53.379 "name": "Nvme$subsystem", 00:19:53.379 "trtype": "$TEST_TRANSPORT", 00:19:53.379 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:53.379 "adrfam": "ipv4", 00:19:53.379 "trsvcid": "$NVMF_PORT", 00:19:53.379 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:53.379 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:53.379 "hdgst": ${hdgst:-false}, 00:19:53.379 "ddgst": ${ddgst:-false} 00:19:53.379 }, 00:19:53.379 "method": "bdev_nvme_attach_controller" 00:19:53.379 } 00:19:53.379 EOF 00:19:53.379 )") 00:19:53.379 01:13:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:53.379 01:13:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:53.379 01:13:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:53.379 { 00:19:53.379 "params": { 00:19:53.379 "name": "Nvme$subsystem", 00:19:53.379 "trtype": "$TEST_TRANSPORT", 00:19:53.379 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:53.379 "adrfam": "ipv4", 00:19:53.379 "trsvcid": "$NVMF_PORT", 00:19:53.379 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:53.379 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:53.379 "hdgst": ${hdgst:-false}, 00:19:53.379 "ddgst": ${ddgst:-false} 00:19:53.379 }, 00:19:53.379 "method": "bdev_nvme_attach_controller" 00:19:53.379 } 00:19:53.379 EOF 00:19:53.379 )") 00:19:53.379 01:13:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:53.379 01:13:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:53.379 01:13:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:53.379 { 00:19:53.379 "params": { 00:19:53.379 "name": "Nvme$subsystem", 00:19:53.379 "trtype": "$TEST_TRANSPORT", 00:19:53.379 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:53.379 "adrfam": "ipv4", 00:19:53.379 "trsvcid": "$NVMF_PORT", 00:19:53.379 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:53.379 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:53.379 "hdgst": ${hdgst:-false}, 00:19:53.379 "ddgst": ${ddgst:-false} 00:19:53.379 }, 00:19:53.379 "method": "bdev_nvme_attach_controller" 00:19:53.379 } 00:19:53.379 EOF 00:19:53.379 )") 00:19:53.379 01:13:09 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:53.379 01:13:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:53.379 01:13:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:53.379 { 00:19:53.379 "params": { 00:19:53.379 "name": "Nvme$subsystem", 00:19:53.379 "trtype": "$TEST_TRANSPORT", 00:19:53.379 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:53.379 "adrfam": "ipv4", 00:19:53.379 "trsvcid": "$NVMF_PORT", 00:19:53.379 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:53.379 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:53.379 "hdgst": ${hdgst:-false}, 00:19:53.379 "ddgst": ${ddgst:-false} 00:19:53.379 }, 00:19:53.379 "method": "bdev_nvme_attach_controller" 00:19:53.379 } 00:19:53.379 EOF 00:19:53.379 )") 00:19:53.379 01:13:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:53.379 01:13:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:53.379 01:13:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:53.379 { 00:19:53.379 "params": { 00:19:53.379 "name": "Nvme$subsystem", 00:19:53.379 "trtype": "$TEST_TRANSPORT", 00:19:53.379 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:53.379 "adrfam": "ipv4", 00:19:53.379 "trsvcid": "$NVMF_PORT", 00:19:53.379 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:53.379 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:53.379 "hdgst": ${hdgst:-false}, 00:19:53.379 "ddgst": ${ddgst:-false} 00:19:53.379 }, 00:19:53.379 "method": "bdev_nvme_attach_controller" 00:19:53.379 } 00:19:53.379 EOF 00:19:53.379 )") 00:19:53.379 01:13:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:53.379 01:13:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:53.379 01:13:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:53.379 { 00:19:53.379 "params": { 00:19:53.379 "name": "Nvme$subsystem", 00:19:53.379 "trtype": "$TEST_TRANSPORT", 00:19:53.379 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:53.379 "adrfam": "ipv4", 00:19:53.379 "trsvcid": "$NVMF_PORT", 00:19:53.379 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:53.379 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:53.379 "hdgst": ${hdgst:-false}, 00:19:53.379 "ddgst": ${ddgst:-false} 00:19:53.379 }, 00:19:53.379 "method": "bdev_nvme_attach_controller" 00:19:53.379 } 00:19:53.379 EOF 00:19:53.379 )") 00:19:53.379 01:13:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:53.379 01:13:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:53.379 01:13:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:53.379 { 00:19:53.379 "params": { 00:19:53.379 "name": "Nvme$subsystem", 00:19:53.379 "trtype": "$TEST_TRANSPORT", 00:19:53.379 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:53.379 "adrfam": "ipv4", 00:19:53.379 "trsvcid": "$NVMF_PORT", 00:19:53.379 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:53.379 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:53.379 "hdgst": ${hdgst:-false}, 00:19:53.379 "ddgst": ${ddgst:-false} 00:19:53.379 }, 00:19:53.379 "method": "bdev_nvme_attach_controller" 00:19:53.379 } 00:19:53.379 EOF 00:19:53.379 )") 00:19:53.379 01:13:09 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:53.379 01:13:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:53.379 01:13:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:53.379 { 00:19:53.379 "params": { 00:19:53.379 "name": "Nvme$subsystem", 00:19:53.379 "trtype": "$TEST_TRANSPORT", 00:19:53.380 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:53.380 "adrfam": "ipv4", 00:19:53.380 "trsvcid": "$NVMF_PORT", 00:19:53.380 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:53.380 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:53.380 "hdgst": ${hdgst:-false}, 00:19:53.380 "ddgst": ${ddgst:-false} 00:19:53.380 }, 00:19:53.380 "method": "bdev_nvme_attach_controller" 00:19:53.380 } 00:19:53.380 EOF 00:19:53.380 )") 00:19:53.380 01:13:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:53.380 01:13:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:53.380 01:13:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:53.380 { 00:19:53.380 "params": { 00:19:53.380 "name": "Nvme$subsystem", 00:19:53.380 "trtype": "$TEST_TRANSPORT", 00:19:53.380 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:53.380 "adrfam": "ipv4", 00:19:53.380 "trsvcid": "$NVMF_PORT", 00:19:53.380 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:53.380 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:53.380 "hdgst": ${hdgst:-false}, 00:19:53.380 "ddgst": ${ddgst:-false} 00:19:53.380 }, 00:19:53.380 "method": "bdev_nvme_attach_controller" 00:19:53.380 } 00:19:53.380 EOF 00:19:53.380 )") 00:19:53.380 01:13:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:53.380 01:13:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
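The same generator runs again here, this time feeding bdevperf itself. The --json /dev/fd/62 argument seen earlier in the trace is bash process substitution: the generator's stdout is exposed as a /dev/fd path, so no temporary config file is written. A minimal sketch of the mechanism follows, with jq standing in for the consumer; the real invocation is of the form bdevperf --json <(gen_nvmf_target_json 1 2 ... 10) -q 64 -o 65536 -w verify -t 1.

#!/usr/bin/env bash
# Process-substitution sketch: pass generated JSON to a consumer as a
# file path without touching disk (requires jq).

gen_config() {
    # Stand-in generator; the harness's gen_nvmf_target_json goes here.
    printf '{"queue_depth": 64, "io_size": 65536, "workload": "verify"}\n'
}

consume_config() {
    # $1 arrives as something like /dev/fd/63, a pipe readable once.
    echo "reading config from: $1"
    jq . "$1"
}

# <(...) runs gen_config and expands to the /dev/fd path of its stdout.
consume_config <(gen_config)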
00:19:53.380 01:13:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:19:53.380 01:13:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:53.380 "params": { 00:19:53.380 "name": "Nvme1", 00:19:53.380 "trtype": "tcp", 00:19:53.380 "traddr": "10.0.0.2", 00:19:53.380 "adrfam": "ipv4", 00:19:53.380 "trsvcid": "4420", 00:19:53.380 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:53.380 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:53.380 "hdgst": false, 00:19:53.380 "ddgst": false 00:19:53.380 }, 00:19:53.380 "method": "bdev_nvme_attach_controller" 00:19:53.380 },{ 00:19:53.380 "params": { 00:19:53.380 "name": "Nvme2", 00:19:53.380 "trtype": "tcp", 00:19:53.380 "traddr": "10.0.0.2", 00:19:53.380 "adrfam": "ipv4", 00:19:53.380 "trsvcid": "4420", 00:19:53.380 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:53.380 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:53.380 "hdgst": false, 00:19:53.380 "ddgst": false 00:19:53.380 }, 00:19:53.380 "method": "bdev_nvme_attach_controller" 00:19:53.380 },{ 00:19:53.380 "params": { 00:19:53.380 "name": "Nvme3", 00:19:53.380 "trtype": "tcp", 00:19:53.380 "traddr": "10.0.0.2", 00:19:53.380 "adrfam": "ipv4", 00:19:53.380 "trsvcid": "4420", 00:19:53.380 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:53.380 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:53.380 "hdgst": false, 00:19:53.380 "ddgst": false 00:19:53.380 }, 00:19:53.380 "method": "bdev_nvme_attach_controller" 00:19:53.380 },{ 00:19:53.380 "params": { 00:19:53.380 "name": "Nvme4", 00:19:53.380 "trtype": "tcp", 00:19:53.380 "traddr": "10.0.0.2", 00:19:53.380 "adrfam": "ipv4", 00:19:53.380 "trsvcid": "4420", 00:19:53.380 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:53.380 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:53.380 "hdgst": false, 00:19:53.380 "ddgst": false 00:19:53.380 }, 00:19:53.380 "method": "bdev_nvme_attach_controller" 00:19:53.380 },{ 00:19:53.380 "params": { 00:19:53.380 "name": "Nvme5", 00:19:53.380 "trtype": "tcp", 00:19:53.380 "traddr": "10.0.0.2", 00:19:53.380 "adrfam": "ipv4", 00:19:53.380 "trsvcid": "4420", 00:19:53.380 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:53.380 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:53.380 "hdgst": false, 00:19:53.380 "ddgst": false 00:19:53.380 }, 00:19:53.380 "method": "bdev_nvme_attach_controller" 00:19:53.380 },{ 00:19:53.380 "params": { 00:19:53.380 "name": "Nvme6", 00:19:53.380 "trtype": "tcp", 00:19:53.380 "traddr": "10.0.0.2", 00:19:53.380 "adrfam": "ipv4", 00:19:53.380 "trsvcid": "4420", 00:19:53.380 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:53.380 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:53.380 "hdgst": false, 00:19:53.380 "ddgst": false 00:19:53.380 }, 00:19:53.380 "method": "bdev_nvme_attach_controller" 00:19:53.380 },{ 00:19:53.380 "params": { 00:19:53.380 "name": "Nvme7", 00:19:53.380 "trtype": "tcp", 00:19:53.380 "traddr": "10.0.0.2", 00:19:53.380 "adrfam": "ipv4", 00:19:53.380 "trsvcid": "4420", 00:19:53.380 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:53.380 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:53.380 "hdgst": false, 00:19:53.380 "ddgst": false 00:19:53.380 }, 00:19:53.380 "method": "bdev_nvme_attach_controller" 00:19:53.380 },{ 00:19:53.380 "params": { 00:19:53.380 "name": "Nvme8", 00:19:53.380 "trtype": "tcp", 00:19:53.380 "traddr": "10.0.0.2", 00:19:53.380 "adrfam": "ipv4", 00:19:53.380 "trsvcid": "4420", 00:19:53.380 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:53.380 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:53.380 "hdgst": false, 
00:19:53.380 "ddgst": false 00:19:53.380 }, 00:19:53.380 "method": "bdev_nvme_attach_controller" 00:19:53.380 },{ 00:19:53.380 "params": { 00:19:53.380 "name": "Nvme9", 00:19:53.380 "trtype": "tcp", 00:19:53.380 "traddr": "10.0.0.2", 00:19:53.380 "adrfam": "ipv4", 00:19:53.380 "trsvcid": "4420", 00:19:53.380 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:53.380 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:53.380 "hdgst": false, 00:19:53.380 "ddgst": false 00:19:53.380 }, 00:19:53.380 "method": "bdev_nvme_attach_controller" 00:19:53.380 },{ 00:19:53.380 "params": { 00:19:53.380 "name": "Nvme10", 00:19:53.380 "trtype": "tcp", 00:19:53.380 "traddr": "10.0.0.2", 00:19:53.380 "adrfam": "ipv4", 00:19:53.380 "trsvcid": "4420", 00:19:53.380 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:53.380 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:53.380 "hdgst": false, 00:19:53.380 "ddgst": false 00:19:53.380 }, 00:19:53.380 "method": "bdev_nvme_attach_controller" 00:19:53.380 }' 00:19:53.380 [2024-07-16 01:13:09.340165] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:19:53.380 [2024-07-16 01:13:09.340246] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4192201 ] 00:19:53.638 EAL: No free 2048 kB hugepages reported on node 1 00:19:53.638 [2024-07-16 01:13:09.407165] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.638 [2024-07-16 01:13:09.521886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:55.535 Running I/O for 1 seconds... 00:19:56.469 00:19:56.469 Latency(us) 00:19:56.469 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:56.469 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:56.469 Verification LBA range: start 0x0 length 0x400 00:19:56.469 Nvme1n1 : 1.07 238.72 14.92 0.00 0.00 265188.50 17864.63 233016.89 00:19:56.469 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:56.469 Verification LBA range: start 0x0 length 0x400 00:19:56.469 Nvme2n1 : 1.05 183.27 11.45 0.00 0.00 339555.62 21068.61 278066.82 00:19:56.469 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:56.469 Verification LBA range: start 0x0 length 0x400 00:19:56.469 Nvme3n1 : 1.18 259.47 16.22 0.00 0.00 227122.23 8446.86 253211.69 00:19:56.469 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:56.469 Verification LBA range: start 0x0 length 0x400 00:19:56.469 Nvme4n1 : 1.12 249.61 15.60 0.00 0.00 231989.48 4975.88 253211.69 00:19:56.469 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:56.469 Verification LBA range: start 0x0 length 0x400 00:19:56.469 Nvme5n1 : 1.12 228.53 14.28 0.00 0.00 258965.05 20194.80 250104.79 00:19:56.469 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:56.469 Verification LBA range: start 0x0 length 0x400 00:19:56.469 Nvme6n1 : 1.20 265.73 16.61 0.00 0.00 220281.82 20971.52 267192.70 00:19:56.469 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:56.469 Verification LBA range: start 0x0 length 0x400 00:19:56.469 Nvme7n1 : 1.19 267.80 16.74 0.00 0.00 214828.22 17379.18 226803.11 00:19:56.469 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:56.469 Verification LBA range: start 
0x0 length 0x400 00:19:56.469 Nvme8n1 : 1.20 266.95 16.68 0.00 0.00 212095.43 17961.72 245444.46 00:19:56.469 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:56.469 Verification LBA range: start 0x0 length 0x400 00:19:56.469 Nvme9n1 : 1.18 216.55 13.53 0.00 0.00 256527.36 19612.25 248551.35 00:19:56.469 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:56.469 Verification LBA range: start 0x0 length 0x400 00:19:56.469 Nvme10n1 : 1.21 264.88 16.56 0.00 0.00 206837.68 16505.36 248551.35 00:19:56.469 =================================================================================================================== 00:19:56.469 Total : 2441.51 152.59 0.00 0.00 238080.79 4975.88 278066.82 00:19:56.727 01:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:19:56.727 01:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:19:56.727 01:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:19:56.727 01:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:56.727 01:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:19:56.727 01:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:56.727 01:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:19:56.727 01:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:56.727 01:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:19:56.727 01:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:56.727 01:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:56.727 rmmod nvme_tcp 00:19:56.727 rmmod nvme_fabrics 00:19:56.727 rmmod nvme_keyring 00:19:56.727 01:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:56.727 01:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:19:56.727 01:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:19:56.727 01:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 4191712 ']' 00:19:56.727 01:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 4191712 00:19:56.727 01:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 4191712 ']' 00:19:56.727 01:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 4191712 00:19:56.727 01:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:19:56.727 01:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:56.727 01:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4191712 00:19:56.727 01:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:56.727 01:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 
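stoptarget's cleanup, traced above and continuing below, runs in a fixed order: remove the per-run artifacts, unload the kernel initiator modules (retrying, since nvme-tcp can stay busy while connections drain), then kill the target only after confirming the pid still belongs to the process that was started. A condensed sketch of that sequence with simplified error handling; the function names are illustrative, not the harness's exact helpers.

#!/usr/bin/env bash
# Condensed teardown sketch modeled on the trace above (run as root).

unload_initiator_modules() {
    sync
    set +e                      # module removal may fail while refs drain
    local i
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break   # -v prints the rmmod steps
        sleep 1
    done
    modprobe -v -r nvme-fabrics
    set -e
}

kill_target() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0   # already gone, nothing to do
    # Guard against pid reuse: the harness refuses to kill a process
    # whose command name is sudo.
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" = sudo ] && return 1
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true   # reaps only if $pid is our own child
}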
00:19:56.727 01:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4191712' 00:19:56.727 killing process with pid 4191712 00:19:56.727 01:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 4191712 00:19:56.727 01:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 4191712 00:19:57.294 01:13:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:57.294 01:13:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:57.294 01:13:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:57.294 01:13:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:57.294 01:13:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:57.294 01:13:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:57.294 01:13:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:57.294 01:13:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:59.192 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:59.192 00:19:59.192 real 0m11.842s 00:19:59.192 user 0m33.748s 00:19:59.192 sys 0m3.351s 00:19:59.192 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:59.192 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:59.192 ************************************ 00:19:59.192 END TEST nvmf_shutdown_tc1 00:19:59.192 ************************************ 00:19:59.450 01:13:15 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:19:59.450 01:13:15 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:19:59.450 01:13:15 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:59.450 01:13:15 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:59.450 01:13:15 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:59.450 ************************************ 00:19:59.450 START TEST nvmf_shutdown_tc2 00:19:59.450 ************************************ 00:19:59.450 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:19:59.450 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:19:59.450 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:59.450 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:59.450 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:59.450 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:59.450 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:59.450 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:59.450 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:19:59.450 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:59.450 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:59.450 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:59.450 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:59.450 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:59.450 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:59.450 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:59.450 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:59.450 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:59.451 01:13:15 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:59.451 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:59.451 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == 
up ]] 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:59.451 Found net devices under 0000:09:00.0: cvl_0_0 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:59.451 Found net devices under 0000:09:00.1: cvl_0_1 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 
addr flush cvl_0_1 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:59.451 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:59.451 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:19:59.451 00:19:59.451 --- 10.0.0.2 ping statistics --- 00:19:59.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:59.451 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:59.451 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:59.451 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:19:59.451 00:19:59.451 --- 10.0.0.1 ping statistics --- 00:19:59.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:59.451 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@481 -- # nvmfpid=4193070 00:19:59.451 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:59.452 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 4193070 00:19:59.452 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 4193070 ']' 00:19:59.452 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:59.452 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:59.452 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:59.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:59.452 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:59.452 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:59.709 [2024-07-16 01:13:15.445894] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:19:59.709 [2024-07-16 01:13:15.445986] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:59.709 EAL: No free 2048 kB hugepages reported on node 1 00:19:59.709 [2024-07-16 01:13:15.513213] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:59.709 [2024-07-16 01:13:15.620351] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:59.709 [2024-07-16 01:13:15.620413] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:59.709 [2024-07-16 01:13:15.620442] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:59.709 [2024-07-16 01:13:15.620454] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:59.709 [2024-07-16 01:13:15.620463] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
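The nvmf_tcp_init trace above reduces to the following namespace topology, restated from the commands visible in the trace (the cvl_0_0/cvl_0_1 names are this host's renamed E810 ports and will differ on other machines):

ip netns add cvl_0_0_ns_spdk                      # the target gets its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the first port into it
ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side stays in the host namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                # host -> target reachability check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> host reachability check

With both pings answering, nvmf_tgt is launched inside the namespace, which is why the nvmfpid command line above carries the ip netns exec cvl_0_0_ns_spdk prefix. The prefix stacks once per nvmf_tcp_init call via the NVMF_APP assignment at nvmf/common.sh@270, so tc2's target shows it twice and tc3's (later in this log) three times.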
00:19:59.709 [2024-07-16 01:13:15.620592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:59.709 [2024-07-16 01:13:15.620658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:59.709 [2024-07-16 01:13:15.620731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:59.709 [2024-07-16 01:13:15.620729] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:59.967 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:59.967 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:19:59.967 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:59.967 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:59.967 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:59.967 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:59.968 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:59.968 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.968 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:59.968 [2024-07-16 01:13:15.773586] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:59.968 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.968 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:59.968 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:59.968 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:59.968 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:59.968 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:59.968 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:59.968 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:59.968 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:59.968 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:59.968 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:59.968 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:59.968 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:59.968 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:59.968 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:59.968 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:59.968 01:13:15 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:59.968 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:59.968 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:59.968 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:59.968 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:59.968 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:59.968 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:59.968 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:59.968 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:59.968 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:59.968 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:59.968 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.968 01:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:59.968 Malloc1 00:19:59.968 [2024-07-16 01:13:15.862411] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:59.968 Malloc2 00:19:59.968 Malloc3 00:20:00.226 Malloc4 00:20:00.226 Malloc5 00:20:00.226 Malloc6 00:20:00.226 Malloc7 00:20:00.226 Malloc8 00:20:00.484 Malloc9 00:20:00.484 Malloc10 00:20:00.484 01:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.484 01:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:00.484 01:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:00.484 01:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:00.484 01:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=4193247 00:20:00.484 01:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 4193247 /var/tmp/bdevperf.sock 00:20:00.484 01:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 4193247 ']' 00:20:00.484 01:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:00.484 01:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:00.484 01:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:00.484 01:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:00.484 01:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:20:00.484 01:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:00.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:00.484 01:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:20:00.484 01:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:00.484 01:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:00.484 01:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:00.484 01:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:00.484 { 00:20:00.484 "params": { 00:20:00.484 "name": "Nvme$subsystem", 00:20:00.484 "trtype": "$TEST_TRANSPORT", 00:20:00.484 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:00.484 "adrfam": "ipv4", 00:20:00.484 "trsvcid": "$NVMF_PORT", 00:20:00.484 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:00.484 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:00.484 "hdgst": ${hdgst:-false}, 00:20:00.484 "ddgst": ${ddgst:-false} 00:20:00.484 }, 00:20:00.484 "method": "bdev_nvme_attach_controller" 00:20:00.484 } 00:20:00.484 EOF 00:20:00.484 )") 00:20:00.484 01:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:00.484 01:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:00.484 01:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:00.484 { 00:20:00.484 "params": { 00:20:00.484 "name": "Nvme$subsystem", 00:20:00.484 "trtype": "$TEST_TRANSPORT", 00:20:00.484 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:00.484 "adrfam": "ipv4", 00:20:00.484 "trsvcid": "$NVMF_PORT", 00:20:00.484 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:00.484 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:00.484 "hdgst": ${hdgst:-false}, 00:20:00.484 "ddgst": ${ddgst:-false} 00:20:00.484 }, 00:20:00.484 "method": "bdev_nvme_attach_controller" 00:20:00.484 } 00:20:00.484 EOF 00:20:00.484 )") 00:20:00.484 01:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:00.484 01:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:00.484 01:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:00.484 { 00:20:00.484 "params": { 00:20:00.484 "name": "Nvme$subsystem", 00:20:00.484 "trtype": "$TEST_TRANSPORT", 00:20:00.484 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:00.484 "adrfam": "ipv4", 00:20:00.484 "trsvcid": "$NVMF_PORT", 00:20:00.484 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:00.484 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:00.484 "hdgst": ${hdgst:-false}, 00:20:00.484 "ddgst": ${ddgst:-false} 00:20:00.484 }, 00:20:00.484 "method": "bdev_nvme_attach_controller" 00:20:00.484 } 00:20:00.484 EOF 00:20:00.484 )") 00:20:00.484 01:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:00.484 01:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:00.484 01:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:00.484 { 00:20:00.484 "params": { 00:20:00.484 "name": "Nvme$subsystem", 00:20:00.484 "trtype": "$TEST_TRANSPORT", 00:20:00.484 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:00.484 "adrfam": "ipv4", 00:20:00.484 "trsvcid": "$NVMF_PORT", 
00:20:00.484 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:00.484 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:00.484 "hdgst": ${hdgst:-false}, 00:20:00.484 "ddgst": ${ddgst:-false} 00:20:00.484 }, 00:20:00.484 "method": "bdev_nvme_attach_controller" 00:20:00.484 } 00:20:00.484 EOF 00:20:00.484 )") 00:20:00.484 01:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:00.484 01:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:00.484 01:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:00.484 { 00:20:00.484 "params": { 00:20:00.484 "name": "Nvme$subsystem", 00:20:00.484 "trtype": "$TEST_TRANSPORT", 00:20:00.484 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:00.484 "adrfam": "ipv4", 00:20:00.484 "trsvcid": "$NVMF_PORT", 00:20:00.484 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:00.484 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:00.484 "hdgst": ${hdgst:-false}, 00:20:00.484 "ddgst": ${ddgst:-false} 00:20:00.484 }, 00:20:00.484 "method": "bdev_nvme_attach_controller" 00:20:00.484 } 00:20:00.484 EOF 00:20:00.484 )") 00:20:00.484 01:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:00.484 01:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:00.484 01:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:00.484 { 00:20:00.484 "params": { 00:20:00.484 "name": "Nvme$subsystem", 00:20:00.484 "trtype": "$TEST_TRANSPORT", 00:20:00.484 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:00.484 "adrfam": "ipv4", 00:20:00.484 "trsvcid": "$NVMF_PORT", 00:20:00.484 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:00.484 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:00.484 "hdgst": ${hdgst:-false}, 00:20:00.484 "ddgst": ${ddgst:-false} 00:20:00.484 }, 00:20:00.484 "method": "bdev_nvme_attach_controller" 00:20:00.484 } 00:20:00.484 EOF 00:20:00.484 )") 00:20:00.484 01:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:00.484 01:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:00.484 01:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:00.484 { 00:20:00.484 "params": { 00:20:00.484 "name": "Nvme$subsystem", 00:20:00.484 "trtype": "$TEST_TRANSPORT", 00:20:00.484 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:00.484 "adrfam": "ipv4", 00:20:00.484 "trsvcid": "$NVMF_PORT", 00:20:00.484 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:00.484 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:00.484 "hdgst": ${hdgst:-false}, 00:20:00.484 "ddgst": ${ddgst:-false} 00:20:00.484 }, 00:20:00.484 "method": "bdev_nvme_attach_controller" 00:20:00.484 } 00:20:00.484 EOF 00:20:00.484 )") 00:20:00.484 01:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:00.484 01:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:00.484 01:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:00.484 { 00:20:00.484 "params": { 00:20:00.484 "name": "Nvme$subsystem", 00:20:00.484 "trtype": "$TEST_TRANSPORT", 00:20:00.484 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:00.484 "adrfam": "ipv4", 00:20:00.484 "trsvcid": "$NVMF_PORT", 00:20:00.484 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:20:00.484 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:00.484 "hdgst": ${hdgst:-false}, 00:20:00.484 "ddgst": ${ddgst:-false} 00:20:00.484 }, 00:20:00.484 "method": "bdev_nvme_attach_controller" 00:20:00.484 } 00:20:00.484 EOF 00:20:00.484 )") 00:20:00.484 01:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:00.484 01:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:00.484 01:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:00.484 { 00:20:00.484 "params": { 00:20:00.484 "name": "Nvme$subsystem", 00:20:00.484 "trtype": "$TEST_TRANSPORT", 00:20:00.484 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:00.484 "adrfam": "ipv4", 00:20:00.484 "trsvcid": "$NVMF_PORT", 00:20:00.485 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:00.485 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:00.485 "hdgst": ${hdgst:-false}, 00:20:00.485 "ddgst": ${ddgst:-false} 00:20:00.485 }, 00:20:00.485 "method": "bdev_nvme_attach_controller" 00:20:00.485 } 00:20:00.485 EOF 00:20:00.485 )") 00:20:00.485 01:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:00.485 01:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:00.485 01:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:00.485 { 00:20:00.485 "params": { 00:20:00.485 "name": "Nvme$subsystem", 00:20:00.485 "trtype": "$TEST_TRANSPORT", 00:20:00.485 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:00.485 "adrfam": "ipv4", 00:20:00.485 "trsvcid": "$NVMF_PORT", 00:20:00.485 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:00.485 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:00.485 "hdgst": ${hdgst:-false}, 00:20:00.485 "ddgst": ${ddgst:-false} 00:20:00.485 }, 00:20:00.485 "method": "bdev_nvme_attach_controller" 00:20:00.485 } 00:20:00.485 EOF 00:20:00.485 )") 00:20:00.485 01:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:00.485 01:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:20:00.485 01:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:20:00.485 01:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:00.485 "params": { 00:20:00.485 "name": "Nvme1", 00:20:00.485 "trtype": "tcp", 00:20:00.485 "traddr": "10.0.0.2", 00:20:00.485 "adrfam": "ipv4", 00:20:00.485 "trsvcid": "4420", 00:20:00.485 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.485 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:00.485 "hdgst": false, 00:20:00.485 "ddgst": false 00:20:00.485 }, 00:20:00.485 "method": "bdev_nvme_attach_controller" 00:20:00.485 },{ 00:20:00.485 "params": { 00:20:00.485 "name": "Nvme2", 00:20:00.485 "trtype": "tcp", 00:20:00.485 "traddr": "10.0.0.2", 00:20:00.485 "adrfam": "ipv4", 00:20:00.485 "trsvcid": "4420", 00:20:00.485 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:00.485 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:00.485 "hdgst": false, 00:20:00.485 "ddgst": false 00:20:00.485 }, 00:20:00.485 "method": "bdev_nvme_attach_controller" 00:20:00.485 },{ 00:20:00.485 "params": { 00:20:00.485 "name": "Nvme3", 00:20:00.485 "trtype": "tcp", 00:20:00.485 "traddr": "10.0.0.2", 00:20:00.485 "adrfam": "ipv4", 00:20:00.485 "trsvcid": "4420", 00:20:00.485 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:00.485 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:00.485 "hdgst": false, 00:20:00.485 "ddgst": false 00:20:00.485 }, 00:20:00.485 "method": "bdev_nvme_attach_controller" 00:20:00.485 },{ 00:20:00.485 "params": { 00:20:00.485 "name": "Nvme4", 00:20:00.485 "trtype": "tcp", 00:20:00.485 "traddr": "10.0.0.2", 00:20:00.485 "adrfam": "ipv4", 00:20:00.485 "trsvcid": "4420", 00:20:00.485 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:00.485 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:00.485 "hdgst": false, 00:20:00.485 "ddgst": false 00:20:00.485 }, 00:20:00.485 "method": "bdev_nvme_attach_controller" 00:20:00.485 },{ 00:20:00.485 "params": { 00:20:00.485 "name": "Nvme5", 00:20:00.485 "trtype": "tcp", 00:20:00.485 "traddr": "10.0.0.2", 00:20:00.485 "adrfam": "ipv4", 00:20:00.485 "trsvcid": "4420", 00:20:00.485 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:00.485 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:00.485 "hdgst": false, 00:20:00.485 "ddgst": false 00:20:00.485 }, 00:20:00.485 "method": "bdev_nvme_attach_controller" 00:20:00.485 },{ 00:20:00.485 "params": { 00:20:00.485 "name": "Nvme6", 00:20:00.485 "trtype": "tcp", 00:20:00.485 "traddr": "10.0.0.2", 00:20:00.485 "adrfam": "ipv4", 00:20:00.485 "trsvcid": "4420", 00:20:00.485 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:00.485 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:00.485 "hdgst": false, 00:20:00.485 "ddgst": false 00:20:00.485 }, 00:20:00.485 "method": "bdev_nvme_attach_controller" 00:20:00.485 },{ 00:20:00.485 "params": { 00:20:00.485 "name": "Nvme7", 00:20:00.485 "trtype": "tcp", 00:20:00.485 "traddr": "10.0.0.2", 00:20:00.485 "adrfam": "ipv4", 00:20:00.485 "trsvcid": "4420", 00:20:00.485 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:00.485 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:00.485 "hdgst": false, 00:20:00.485 "ddgst": false 00:20:00.485 }, 00:20:00.485 "method": "bdev_nvme_attach_controller" 00:20:00.485 },{ 00:20:00.485 "params": { 00:20:00.485 "name": "Nvme8", 00:20:00.485 "trtype": "tcp", 00:20:00.485 "traddr": "10.0.0.2", 00:20:00.485 "adrfam": "ipv4", 00:20:00.485 "trsvcid": "4420", 00:20:00.485 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:00.485 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:00.485 "hdgst": false, 
00:20:00.485 "ddgst": false 00:20:00.485 }, 00:20:00.485 "method": "bdev_nvme_attach_controller" 00:20:00.485 },{ 00:20:00.485 "params": { 00:20:00.485 "name": "Nvme9", 00:20:00.485 "trtype": "tcp", 00:20:00.485 "traddr": "10.0.0.2", 00:20:00.485 "adrfam": "ipv4", 00:20:00.485 "trsvcid": "4420", 00:20:00.485 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:00.485 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:00.485 "hdgst": false, 00:20:00.485 "ddgst": false 00:20:00.485 }, 00:20:00.485 "method": "bdev_nvme_attach_controller" 00:20:00.485 },{ 00:20:00.485 "params": { 00:20:00.485 "name": "Nvme10", 00:20:00.485 "trtype": "tcp", 00:20:00.485 "traddr": "10.0.0.2", 00:20:00.485 "adrfam": "ipv4", 00:20:00.485 "trsvcid": "4420", 00:20:00.485 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:00.485 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:00.485 "hdgst": false, 00:20:00.485 "ddgst": false 00:20:00.485 }, 00:20:00.485 "method": "bdev_nvme_attach_controller" 00:20:00.485 }' 00:20:00.485 [2024-07-16 01:13:16.385061] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:20:00.485 [2024-07-16 01:13:16.385137] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4193247 ] 00:20:00.485 EAL: No free 2048 kB hugepages reported on node 1 00:20:00.485 [2024-07-16 01:13:16.447612] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.743 [2024-07-16 01:13:16.557825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:02.638 Running I/O for 10 seconds... 00:20:03.204 01:13:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:03.204 01:13:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:20:03.204 01:13:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:03.204 01:13:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.204 01:13:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:03.204 01:13:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.204 01:13:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:03.204 01:13:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:03.204 01:13:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:20:03.204 01:13:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:20:03.204 01:13:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:20:03.204 01:13:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:20:03.204 01:13:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:03.204 01:13:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:03.204 01:13:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:03.204 01:13:19 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:03.204 01:13:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:20:03.204 01:13:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:03.204 01:13:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131
00:20:03.204 01:13:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']'
00:20:03.204 01:13:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0
00:20:03.204 01:13:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break
00:20:03.204 01:13:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0
00:20:03.204 01:13:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 4193247
00:20:03.204 01:13:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 4193247 ']'
00:20:03.204 01:13:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 4193247
00:20:03.204 01:13:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname
00:20:03.204 01:13:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:20:03.204 01:13:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4193247
00:20:03.204 01:13:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:20:03.204 01:13:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:20:03.204 01:13:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4193247'
00:20:03.204 killing process with pid 4193247
00:20:03.204 01:13:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 4193247
00:20:03.204 01:13:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 4193247
00:20:03.462 Received shutdown signal, test time was about 0.933847 seconds
00:20:03.462
00:20:03.462 Latency(us)
00:20:03.462 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:03.462 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:03.462 Verification LBA range: start 0x0 length 0x400
00:20:03.462 Nvme1n1 : 0.87 220.22 13.76 0.00 0.00 287049.96 24563.86 256318.58
00:20:03.462 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:03.462 Verification LBA range: start 0x0 length 0x400
00:20:03.462 Nvme2n1 : 0.85 226.27 14.14 0.00 0.00 272935.19 21068.61 254765.13
00:20:03.462 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:03.462 Verification LBA range: start 0x0 length 0x400
00:20:03.462 Nvme3n1 : 0.89 286.10 17.88 0.00 0.00 210942.48 17670.45 234570.33
00:20:03.462 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:03.462 Verification LBA range: start 0x0 length 0x400
00:20:03.462 Nvme4n1 : 0.89 288.00 18.00 0.00 0.00 205666.23 17767.54 240784.12
00:20:03.462 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:03.462 Verification LBA range: start 0x0 length 0x400
00:20:03.462 Nvme5n1 : 0.88 217.60 13.60 0.00 0.00 266047.84 22039.51 256318.58
00:20:03.462 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:03.462 Verification LBA range: start 0x0 length 0x400
00:20:03.462 Nvme6n1 : 0.88 219.24 13.70 0.00 0.00 257934.73 21262.79 254765.13
00:20:03.462 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:03.462 Verification LBA range: start 0x0 length 0x400
00:20:03.462 Nvme7n1 : 0.86 222.13 13.88 0.00 0.00 247778.67 37865.24 250104.79
00:20:03.462 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:03.462 Verification LBA range: start 0x0 length 0x400
00:20:03.462 Nvme8n1 : 0.86 223.60 13.97 0.00 0.00 240205.37 18641.35 251658.24
00:20:03.462 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:03.462 Verification LBA range: start 0x0 length 0x400
00:20:03.462 Nvme9n1 : 0.93 205.79 12.86 0.00 0.00 246462.39 25437.68 293601.28
00:20:03.462 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:03.462 Verification LBA range: start 0x0 length 0x400
00:20:03.462 Nvme10n1 : 0.88 223.95 14.00 0.00 0.00 228407.36 4757.43 253211.69
00:20:03.462 ===================================================================================================================
00:20:03.462 Total : 2332.88 145.81 0.00 0.00 243920.16 4757.43 293601.28
00:20:03.719 01:13:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1
00:20:04.651 01:13:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 4193070
00:20:04.651 01:13:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget
00:20:04.651 01:13:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:20:04.651 01:13:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:20:04.651 01:13:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:20:04.651 01:13:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini
00:20:04.651 01:13:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup
00:20:04.651 01:13:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync
00:20:04.651 01:13:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:20:04.651 01:13:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e
00:20:04.651 01:13:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20}
00:20:04.651 01:13:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:20:04.651 rmmod nvme_tcp
00:20:04.651 rmmod nvme_fabrics
00:20:04.651 rmmod nvme_keyring
00:20:04.651 01:13:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:20:04.651 01:13:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e
00:20:04.651 01:13:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0
00:20:04.651 01:13:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 4193070 ']'
00:20:04.651 01:13:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 4193070
00:20:04.651 01:13:20
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 4193070 ']' 00:20:04.651 01:13:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 4193070 00:20:04.651 01:13:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:20:04.651 01:13:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:04.909 01:13:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4193070 00:20:04.909 01:13:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:04.909 01:13:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:04.909 01:13:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4193070' 00:20:04.909 killing process with pid 4193070 00:20:04.909 01:13:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 4193070 00:20:04.909 01:13:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 4193070 00:20:05.496 01:13:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:05.496 01:13:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:05.496 01:13:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:05.496 01:13:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:05.496 01:13:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:05.496 01:13:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:05.496 01:13:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:05.496 01:13:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:07.398 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:07.398 00:20:07.398 real 0m7.994s 00:20:07.398 user 0m24.685s 00:20:07.398 sys 0m1.583s 00:20:07.398 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:07.398 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:07.398 ************************************ 00:20:07.398 END TEST nvmf_shutdown_tc2 00:20:07.398 ************************************ 00:20:07.398 01:13:23 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:20:07.398 01:13:23 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:07.399 ************************************ 00:20:07.399 START TEST nvmf_shutdown_tc3 00:20:07.399 ************************************ 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:20:07.399 01:13:23 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:07.399 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:07.399 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:07.399 01:13:23 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:07.399 Found net devices under 0000:09:00.0: cvl_0_0 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:07.399 Found net devices under 0000:09:00.1: cvl_0_1 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:07.399 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:07.657 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:07.657 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:07.657 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:07.657 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:07.657 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:20:07.657 00:20:07.657 --- 10.0.0.2 ping statistics --- 00:20:07.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:07.657 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:20:07.657 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:07.657 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:07.657 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:20:07.657 00:20:07.657 --- 10.0.0.1 ping statistics --- 00:20:07.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:07.657 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:20:07.657 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:07.657 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:20:07.657 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:07.657 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:07.657 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:07.657 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:07.657 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:07.657 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:07.657 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:07.657 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:07.657 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:07.657 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:07.657 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:07.657 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=4194156 00:20:07.657 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:07.657 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 4194156 00:20:07.657 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 4194156 ']' 00:20:07.657 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:07.657 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:07.657 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:07.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:07.657 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:07.657 01:13:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:07.657 [2024-07-16 01:13:23.509548] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 
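For readers following the trace: the nvmf_tcp_init sequence above builds SPDK's standard two-port TCP test topology. One e810 port (cvl_0_0) is moved into a private network namespace to act as the target side, while its sibling (cvl_0_1) stays in the default namespace as the initiator side, so traffic between 10.0.0.1 and 10.0.0.2 actually traverses the NICs instead of being short-circuited as host-local. A minimal standalone sketch of the same plumbing, using the interface names and addresses from this run (run as root):

# Move the target-side port into its own namespace and address both ends.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Allow NVMe/TCP (port 4420) in through the initiator-side interface.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Sanity-check both directions before starting the target.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

This is also why the nvmf_tgt command line above is prefixed with ip netns exec cvl_0_0_ns_spdk: the target process has to see the namespaced port in order to listen on 10.0.0.2.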
00:20:07.657 [2024-07-16 01:13:23.509645] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:07.657 EAL: No free 2048 kB hugepages reported on node 1 00:20:07.657 [2024-07-16 01:13:23.573590] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:07.915 [2024-07-16 01:13:23.682612] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:07.915 [2024-07-16 01:13:23.682668] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:07.915 [2024-07-16 01:13:23.682695] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:07.915 [2024-07-16 01:13:23.682706] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:07.915 [2024-07-16 01:13:23.682715] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:07.915 [2024-07-16 01:13:23.682802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:07.915 [2024-07-16 01:13:23.682909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:07.915 [2024-07-16 01:13:23.682998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:07.915 [2024-07-16 01:13:23.683002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:08.478 01:13:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:08.478 01:13:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:20:08.478 01:13:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:08.478 01:13:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:08.478 01:13:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:08.478 01:13:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:08.478 01:13:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:08.478 01:13:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.478 01:13:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:08.478 [2024-07-16 01:13:24.469737] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:08.736 01:13:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.736 01:13:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:08.736 01:13:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:08.736 01:13:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:08.736 01:13:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:08.736 01:13:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:08.736 01:13:24 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:08.736 01:13:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:08.736 01:13:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:08.736 01:13:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:08.736 01:13:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:08.736 01:13:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:08.736 01:13:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:08.736 01:13:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:08.736 01:13:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:08.736 01:13:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:08.736 01:13:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:08.736 01:13:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:08.736 01:13:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:08.736 01:13:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:08.736 01:13:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:08.736 01:13:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:08.736 01:13:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:08.736 01:13:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:08.736 01:13:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:08.736 01:13:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:08.736 01:13:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:08.736 01:13:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.736 01:13:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:08.736 Malloc1 00:20:08.736 [2024-07-16 01:13:24.558540] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:08.736 Malloc2 00:20:08.736 Malloc3 00:20:08.736 Malloc4 00:20:08.736 Malloc5 00:20:08.994 Malloc6 00:20:08.994 Malloc7 00:20:08.994 Malloc8 00:20:08.994 Malloc9 00:20:08.994 Malloc10 00:20:09.253 01:13:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.253 01:13:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:09.253 01:13:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:09.253 01:13:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:09.253 01:13:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=363 00:20:09.253 01:13:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 363 
/var/tmp/bdevperf.sock 00:20:09.253 01:13:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 363 ']' 00:20:09.253 01:13:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:09.253 01:13:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:09.253 01:13:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:09.253 01:13:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:09.253 01:13:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:20:09.253 01:13:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:09.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:09.253 01:13:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:09.253 01:13:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:20:09.253 01:13:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:09.253 01:13:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:09.253 01:13:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:09.253 { 00:20:09.253 "params": { 00:20:09.253 "name": "Nvme$subsystem", 00:20:09.253 "trtype": "$TEST_TRANSPORT", 00:20:09.253 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:09.253 "adrfam": "ipv4", 00:20:09.253 "trsvcid": "$NVMF_PORT", 00:20:09.253 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:09.253 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:09.253 "hdgst": ${hdgst:-false}, 00:20:09.253 "ddgst": ${ddgst:-false} 00:20:09.253 }, 00:20:09.253 "method": "bdev_nvme_attach_controller" 00:20:09.253 } 00:20:09.253 EOF 00:20:09.253 )") 00:20:09.253 01:13:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:09.253 01:13:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:09.253 01:13:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:09.253 { 00:20:09.253 "params": { 00:20:09.253 "name": "Nvme$subsystem", 00:20:09.253 "trtype": "$TEST_TRANSPORT", 00:20:09.253 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:09.253 "adrfam": "ipv4", 00:20:09.253 "trsvcid": "$NVMF_PORT", 00:20:09.253 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:09.253 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:09.253 "hdgst": ${hdgst:-false}, 00:20:09.253 "ddgst": ${ddgst:-false} 00:20:09.253 }, 00:20:09.253 "method": "bdev_nvme_attach_controller" 00:20:09.253 } 00:20:09.253 EOF 00:20:09.253 )") 00:20:09.253 01:13:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:09.253 01:13:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:09.253 01:13:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:09.253 { 00:20:09.253 "params": { 
00:20:09.253 "name": "Nvme$subsystem", 00:20:09.253 "trtype": "$TEST_TRANSPORT", 00:20:09.253 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:09.253 "adrfam": "ipv4", 00:20:09.253 "trsvcid": "$NVMF_PORT", 00:20:09.253 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:09.253 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:09.253 "hdgst": ${hdgst:-false}, 00:20:09.253 "ddgst": ${ddgst:-false} 00:20:09.253 }, 00:20:09.253 "method": "bdev_nvme_attach_controller" 00:20:09.253 } 00:20:09.253 EOF 00:20:09.253 )") 00:20:09.253 01:13:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:09.253 01:13:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:09.253 01:13:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:09.253 { 00:20:09.253 "params": { 00:20:09.253 "name": "Nvme$subsystem", 00:20:09.253 "trtype": "$TEST_TRANSPORT", 00:20:09.253 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:09.253 "adrfam": "ipv4", 00:20:09.253 "trsvcid": "$NVMF_PORT", 00:20:09.253 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:09.253 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:09.254 "hdgst": ${hdgst:-false}, 00:20:09.254 "ddgst": ${ddgst:-false} 00:20:09.254 }, 00:20:09.254 "method": "bdev_nvme_attach_controller" 00:20:09.254 } 00:20:09.254 EOF 00:20:09.254 )") 00:20:09.254 01:13:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:09.254 01:13:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:09.254 01:13:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:09.254 { 00:20:09.254 "params": { 00:20:09.254 "name": "Nvme$subsystem", 00:20:09.254 "trtype": "$TEST_TRANSPORT", 00:20:09.254 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:09.254 "adrfam": "ipv4", 00:20:09.254 "trsvcid": "$NVMF_PORT", 00:20:09.254 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:09.254 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:09.254 "hdgst": ${hdgst:-false}, 00:20:09.254 "ddgst": ${ddgst:-false} 00:20:09.254 }, 00:20:09.254 "method": "bdev_nvme_attach_controller" 00:20:09.254 } 00:20:09.254 EOF 00:20:09.254 )") 00:20:09.254 01:13:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:09.254 01:13:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:09.254 01:13:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:09.254 { 00:20:09.254 "params": { 00:20:09.254 "name": "Nvme$subsystem", 00:20:09.254 "trtype": "$TEST_TRANSPORT", 00:20:09.254 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:09.254 "adrfam": "ipv4", 00:20:09.254 "trsvcid": "$NVMF_PORT", 00:20:09.254 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:09.254 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:09.254 "hdgst": ${hdgst:-false}, 00:20:09.254 "ddgst": ${ddgst:-false} 00:20:09.254 }, 00:20:09.254 "method": "bdev_nvme_attach_controller" 00:20:09.254 } 00:20:09.254 EOF 00:20:09.254 )") 00:20:09.254 01:13:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:09.254 01:13:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:09.254 01:13:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:09.254 { 00:20:09.254 "params": { 00:20:09.254 "name": 
"Nvme$subsystem", 00:20:09.254 "trtype": "$TEST_TRANSPORT", 00:20:09.254 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:09.254 "adrfam": "ipv4", 00:20:09.254 "trsvcid": "$NVMF_PORT", 00:20:09.254 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:09.254 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:09.254 "hdgst": ${hdgst:-false}, 00:20:09.254 "ddgst": ${ddgst:-false} 00:20:09.254 }, 00:20:09.254 "method": "bdev_nvme_attach_controller" 00:20:09.254 } 00:20:09.254 EOF 00:20:09.254 )") 00:20:09.254 01:13:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:09.254 01:13:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:09.254 01:13:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:09.254 { 00:20:09.254 "params": { 00:20:09.254 "name": "Nvme$subsystem", 00:20:09.254 "trtype": "$TEST_TRANSPORT", 00:20:09.254 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:09.254 "adrfam": "ipv4", 00:20:09.254 "trsvcid": "$NVMF_PORT", 00:20:09.254 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:09.254 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:09.254 "hdgst": ${hdgst:-false}, 00:20:09.254 "ddgst": ${ddgst:-false} 00:20:09.254 }, 00:20:09.254 "method": "bdev_nvme_attach_controller" 00:20:09.254 } 00:20:09.254 EOF 00:20:09.254 )") 00:20:09.254 01:13:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:09.254 01:13:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:09.254 01:13:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:09.254 { 00:20:09.254 "params": { 00:20:09.254 "name": "Nvme$subsystem", 00:20:09.254 "trtype": "$TEST_TRANSPORT", 00:20:09.254 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:09.254 "adrfam": "ipv4", 00:20:09.254 "trsvcid": "$NVMF_PORT", 00:20:09.254 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:09.254 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:09.254 "hdgst": ${hdgst:-false}, 00:20:09.254 "ddgst": ${ddgst:-false} 00:20:09.254 }, 00:20:09.254 "method": "bdev_nvme_attach_controller" 00:20:09.254 } 00:20:09.254 EOF 00:20:09.254 )") 00:20:09.254 01:13:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:09.254 01:13:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:09.254 01:13:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:09.254 { 00:20:09.254 "params": { 00:20:09.254 "name": "Nvme$subsystem", 00:20:09.254 "trtype": "$TEST_TRANSPORT", 00:20:09.254 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:09.254 "adrfam": "ipv4", 00:20:09.254 "trsvcid": "$NVMF_PORT", 00:20:09.254 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:09.254 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:09.254 "hdgst": ${hdgst:-false}, 00:20:09.254 "ddgst": ${ddgst:-false} 00:20:09.254 }, 00:20:09.254 "method": "bdev_nvme_attach_controller" 00:20:09.254 } 00:20:09.254 EOF 00:20:09.254 )") 00:20:09.254 01:13:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:09.254 01:13:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
00:20:09.254 01:13:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:20:09.254 01:13:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:09.254 "params": { 00:20:09.254 "name": "Nvme1", 00:20:09.254 "trtype": "tcp", 00:20:09.254 "traddr": "10.0.0.2", 00:20:09.254 "adrfam": "ipv4", 00:20:09.254 "trsvcid": "4420", 00:20:09.254 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:09.254 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:09.254 "hdgst": false, 00:20:09.254 "ddgst": false 00:20:09.254 }, 00:20:09.254 "method": "bdev_nvme_attach_controller" 00:20:09.254 },{ 00:20:09.254 "params": { 00:20:09.254 "name": "Nvme2", 00:20:09.254 "trtype": "tcp", 00:20:09.254 "traddr": "10.0.0.2", 00:20:09.254 "adrfam": "ipv4", 00:20:09.254 "trsvcid": "4420", 00:20:09.254 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:09.254 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:09.254 "hdgst": false, 00:20:09.254 "ddgst": false 00:20:09.254 }, 00:20:09.254 "method": "bdev_nvme_attach_controller" 00:20:09.254 },{ 00:20:09.254 "params": { 00:20:09.254 "name": "Nvme3", 00:20:09.254 "trtype": "tcp", 00:20:09.254 "traddr": "10.0.0.2", 00:20:09.254 "adrfam": "ipv4", 00:20:09.254 "trsvcid": "4420", 00:20:09.254 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:09.254 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:09.254 "hdgst": false, 00:20:09.254 "ddgst": false 00:20:09.254 }, 00:20:09.254 "method": "bdev_nvme_attach_controller" 00:20:09.254 },{ 00:20:09.254 "params": { 00:20:09.254 "name": "Nvme4", 00:20:09.254 "trtype": "tcp", 00:20:09.254 "traddr": "10.0.0.2", 00:20:09.254 "adrfam": "ipv4", 00:20:09.254 "trsvcid": "4420", 00:20:09.254 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:09.254 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:09.254 "hdgst": false, 00:20:09.254 "ddgst": false 00:20:09.254 }, 00:20:09.254 "method": "bdev_nvme_attach_controller" 00:20:09.254 },{ 00:20:09.254 "params": { 00:20:09.254 "name": "Nvme5", 00:20:09.254 "trtype": "tcp", 00:20:09.254 "traddr": "10.0.0.2", 00:20:09.254 "adrfam": "ipv4", 00:20:09.254 "trsvcid": "4420", 00:20:09.254 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:09.254 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:09.254 "hdgst": false, 00:20:09.254 "ddgst": false 00:20:09.254 }, 00:20:09.254 "method": "bdev_nvme_attach_controller" 00:20:09.254 },{ 00:20:09.254 "params": { 00:20:09.254 "name": "Nvme6", 00:20:09.254 "trtype": "tcp", 00:20:09.254 "traddr": "10.0.0.2", 00:20:09.254 "adrfam": "ipv4", 00:20:09.254 "trsvcid": "4420", 00:20:09.254 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:09.254 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:09.254 "hdgst": false, 00:20:09.254 "ddgst": false 00:20:09.254 }, 00:20:09.254 "method": "bdev_nvme_attach_controller" 00:20:09.254 },{ 00:20:09.254 "params": { 00:20:09.254 "name": "Nvme7", 00:20:09.254 "trtype": "tcp", 00:20:09.254 "traddr": "10.0.0.2", 00:20:09.254 "adrfam": "ipv4", 00:20:09.254 "trsvcid": "4420", 00:20:09.254 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:09.254 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:09.254 "hdgst": false, 00:20:09.254 "ddgst": false 00:20:09.254 }, 00:20:09.254 "method": "bdev_nvme_attach_controller" 00:20:09.254 },{ 00:20:09.254 "params": { 00:20:09.254 "name": "Nvme8", 00:20:09.254 "trtype": "tcp", 00:20:09.254 "traddr": "10.0.0.2", 00:20:09.254 "adrfam": "ipv4", 00:20:09.254 "trsvcid": "4420", 00:20:09.254 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:09.254 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:09.254 "hdgst": false, 
00:20:09.254 "ddgst": false 00:20:09.254 }, 00:20:09.254 "method": "bdev_nvme_attach_controller" 00:20:09.254 },{ 00:20:09.254 "params": { 00:20:09.254 "name": "Nvme9", 00:20:09.254 "trtype": "tcp", 00:20:09.254 "traddr": "10.0.0.2", 00:20:09.254 "adrfam": "ipv4", 00:20:09.254 "trsvcid": "4420", 00:20:09.254 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:09.254 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:09.254 "hdgst": false, 00:20:09.254 "ddgst": false 00:20:09.254 }, 00:20:09.254 "method": "bdev_nvme_attach_controller" 00:20:09.254 },{ 00:20:09.254 "params": { 00:20:09.254 "name": "Nvme10", 00:20:09.254 "trtype": "tcp", 00:20:09.254 "traddr": "10.0.0.2", 00:20:09.254 "adrfam": "ipv4", 00:20:09.255 "trsvcid": "4420", 00:20:09.255 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:09.255 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:09.255 "hdgst": false, 00:20:09.255 "ddgst": false 00:20:09.255 }, 00:20:09.255 "method": "bdev_nvme_attach_controller" 00:20:09.255 }' 00:20:09.255 [2024-07-16 01:13:25.077900] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:20:09.255 [2024-07-16 01:13:25.078010] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid363 ] 00:20:09.255 EAL: No free 2048 kB hugepages reported on node 1 00:20:09.255 [2024-07-16 01:13:25.140162] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.513 [2024-07-16 01:13:25.252364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:10.886 Running I/O for 10 seconds... 00:20:11.149 01:13:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:11.149 01:13:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:20:11.149 01:13:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:11.149 01:13:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.149 01:13:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:11.149 01:13:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.149 01:13:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:11.149 01:13:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:11.149 01:13:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:11.149 01:13:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:20:11.149 01:13:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:20:11.149 01:13:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:20:11.149 01:13:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:20:11.149 01:13:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:11.149 01:13:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme1n1 00:20:11.149 01:13:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:11.149 01:13:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.149 01:13:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:11.149 01:13:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.149 01:13:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:20:11.149 01:13:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:20:11.149 01:13:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:11.448 01:13:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:11.448 01:13:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:11.448 01:13:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:11.448 01:13:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:11.448 01:13:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.448 01:13:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:11.448 01:13:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.725 01:13:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=152 00:20:11.725 01:13:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 152 -ge 100 ']' 00:20:11.725 01:13:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:20:11.725 01:13:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:20:11.725 01:13:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:20:11.725 01:13:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 4194156 00:20:11.725 01:13:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 4194156 ']' 00:20:11.725 01:13:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 4194156 00:20:11.725 01:13:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:20:11.725 01:13:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:11.725 01:13:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4194156 00:20:11.725 01:13:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:11.725 01:13:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:11.725 01:13:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4194156' 00:20:11.725 killing process with pid 4194156 00:20:11.725 01:13:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 4194156 00:20:11.726 01:13:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 4194156 00:20:11.726 
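Two helpers carry the logic visible in this stretch: waitforio polls bdevperf's iostat over the RPC socket until Nvme1n1 has completed at least 100 reads (67 on the first poll here, 152 after a 0.25 s retry), and killprocess then tears down the nvmf_tgt with pid 4194156. A sketch of the polling idiom, with SPDK's scripts/rpc.py standing in for the suite's rpc_cmd wrapper:

# Poll a bdev's read counter over an RPC socket until it crosses a
# threshold, retrying up to 10 times (mirrors shutdown.sh's waitforio).
waitforio() {
  local sock=$1 bdev=$2 i count
  for ((i = 10; i != 0; i--)); do
    count=$(rpc.py -s "$sock" bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0].num_read_ops')
    if [ "$count" -ge 100 ]; then
      return 0    # enough I/O observed: bdevperf is demonstrably running
    fi
    sleep 0.25
  done
  return 1        # the bdev never became busy
}

waitforio /var/tmp/bdevperf.sock Nvme1n1

The burst of nvmf_tcp_qpair_set_recv_state errors that follows appears to be the target logging each queue pair's state transitions as it is killed with I/O still in flight.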
[2024-07-16 01:13:27.452371] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53d5c0 is same with the state(5) to be set
00:20:11.726 (message repeated for tqpair=0x53d5c0 from 01:13:27.452572 through 01:13:27.453334)
00:20:11.726 [2024-07-16 01:13:27.454652] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53f520 is same with the state(5) to be set
00:20:11.727 (message repeated for tqpair=0x53f520 from 01:13:27.454690 through 01:13:27.455465)
00:20:11.727 [2024-07-16 01:13:27.459420] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53e480 is same with the state(5) to be set
00:20:11.728 (message repeated for tqpair=0x53e480 from 01:13:27.459469 through 01:13:27.460247)
00:20:11.728 [2024-07-16 01:13:27.461192] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53e960 is same with the state(5) to be set
00:20:11.728 (message repeating for tqpair=0x53e960 from 01:13:27.461217; the captured log cuts off mid-run here)
same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.461303] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53e960 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.461315] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53e960 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.461326] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53e960 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.461338] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53e960 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.461350] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53e960 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.461362] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53e960 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.461373] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53e960 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.461386] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53e960 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.461397] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53e960 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.461409] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53e960 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.461421] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53e960 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.461438] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53e960 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.461450] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53e960 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.461462] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53e960 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.461474] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53e960 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.461486] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53e960 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.461498] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53e960 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.461510] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53e960 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.461522] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53e960 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.461534] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53e960 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.461545] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53e960 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.461557] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x53e960 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.461570] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53e960 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.461582] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53e960 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.461594] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53e960 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.461605] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53e960 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.461617] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53e960 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.461629] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53e960 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.461641] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53e960 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.461653] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53e960 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.461664] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53e960 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.461676] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53e960 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.461688] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53e960 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.461701] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53e960 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.461713] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53e960 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.461724] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53e960 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.461736] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53e960 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.461748] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53e960 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.461764] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53e960 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.461776] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53e960 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.461788] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53e960 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.461800] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53e960 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.461812] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53e960 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.461823] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53e960 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.461835] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53e960 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.461847] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53e960 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.461859] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53e960 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.461871] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53e960 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.461883] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53e960 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.461895] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53e960 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.461907] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53e960 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.461919] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53e960 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.461931] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53e960 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.461948] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53e960 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.461969] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53e960 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.463098] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x774f70 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.463125] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x774f70 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.463138] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x774f70 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.463151] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x774f70 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.463164] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x774f70 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.463176] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x774f70 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.463188] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x774f70 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.463201] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x774f70 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.463213] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x774f70 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.463224] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x774f70 is same with the state(5) to be set 
00:20:11.728 [2024-07-16 01:13:27.463241] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x774f70 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.463254] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x774f70 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.463273] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x774f70 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.463284] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x774f70 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.463297] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x774f70 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.463309] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x774f70 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.463322] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x774f70 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.463341] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x774f70 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.463353] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x774f70 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.463364] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x774f70 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.463376] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x774f70 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.463388] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x774f70 is same with the state(5) to be set 00:20:11.728 [2024-07-16 01:13:27.463405] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x774f70 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.463416] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x774f70 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.463428] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x774f70 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.463440] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x774f70 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.463453] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x774f70 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.463464] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x774f70 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.463477] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x774f70 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.463488] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x774f70 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.463500] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x774f70 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.463512] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x774f70 is 
same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.463524] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x774f70 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.463536] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x774f70 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.463547] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x774f70 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.463559] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x774f70 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.463571] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x774f70 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.463586] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x774f70 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.463598] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x774f70 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.463610] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x774f70 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.463621] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x774f70 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.463633] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x774f70 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.463645] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x774f70 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.463657] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x774f70 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.463669] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x774f70 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.463680] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x774f70 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.463692] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x774f70 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.463704] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x774f70 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.463718] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x774f70 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.463730] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x774f70 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.463741] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x774f70 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.463753] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x774f70 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.463765] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x774f70 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.463777] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x774f70 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.463789] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x774f70 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.463800] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x774f70 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.463812] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x774f70 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.463824] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x774f70 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.463836] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x774f70 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.463847] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x774f70 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.463860] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x774f70 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.463872] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x774f70 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.463883] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x774f70 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.465378] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x775450 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.465423] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x775450 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.465450] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x775450 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.465464] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x775450 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.465477] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x775450 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.465490] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x775450 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.465503] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x775450 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.465515] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x775450 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.465528] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x775450 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.465540] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x775450 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.465552] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x775450 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.465564] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x775450 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.465576] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x775450 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.465588] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x775450 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.465601] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x775450 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.465613] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x775450 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.465625] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x775450 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.465637] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x775450 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.465649] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x775450 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.465661] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x775450 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.465673] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x775450 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.465685] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x775450 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.465697] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x775450 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.465708] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x775450 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.465720] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x775450 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.465732] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x775450 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.465744] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x775450 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.465756] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x775450 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.465768] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x775450 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.465784] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x775450 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.465797] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x775450 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.465809] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x775450 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.465821] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x775450 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.465833] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x775450 is same with the state(5) to be set 
00:20:11.729 [2024-07-16 01:13:27.465846] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x775450 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.465858] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x775450 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.465870] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x775450 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.465882] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x775450 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.465894] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x775450 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.465907] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x775450 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.465919] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x775450 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.465931] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x775450 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.465943] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x775450 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.465962] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x775450 is same with the state(5) to be set 00:20:11.729 [2024-07-16 01:13:27.465977] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x775450 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.465989] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x775450 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.466002] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x775450 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.466014] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x775450 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.466026] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x775450 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.466038] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x775450 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.466050] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x775450 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.466062] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x775450 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.466074] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x775450 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.466086] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x775450 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.466098] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x775450 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.466111] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x775450 is 
same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.466127] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x775450 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.466140] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x775450 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.466151] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x775450 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.466163] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x775450 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.466181] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x775450 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.466194] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x775450 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.466206] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x775450 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.467330] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53ecd0 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.467356] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53ecd0 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.467370] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53ecd0 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.467383] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53ecd0 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.467395] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53ecd0 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.467407] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53ecd0 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.467420] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53ecd0 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.467435] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53ecd0 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.467447] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53ecd0 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.467460] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53ecd0 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.467472] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53ecd0 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.467484] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53ecd0 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.467500] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53ecd0 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.467512] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53ecd0 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.467524] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x53ecd0 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.467536] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53ecd0 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.467548] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53ecd0 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.467560] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53ecd0 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.467572] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53ecd0 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.467584] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53ecd0 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.467595] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53ecd0 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.467612] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53ecd0 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.467625] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53ecd0 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.467636] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53ecd0 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.467648] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53ecd0 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.467660] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53ecd0 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.467672] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53ecd0 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.467684] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53ecd0 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.467696] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53ecd0 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.467708] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53ecd0 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.467719] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53ecd0 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.467731] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53ecd0 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.467744] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53ecd0 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.467756] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53ecd0 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.467767] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53ecd0 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.467779] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53ecd0 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.467791] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53ecd0 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.467803] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53ecd0 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.467815] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53ecd0 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.467828] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53ecd0 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.467840] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53ecd0 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.467852] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53ecd0 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.467864] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53ecd0 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.467876] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53ecd0 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.467888] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53ecd0 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.467900] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53ecd0 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.467912] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53ecd0 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.467924] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53ecd0 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.467948] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53ecd0 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.467970] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53ecd0 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.467983] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53ecd0 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.467995] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53ecd0 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.468007] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53ecd0 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.468019] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53ecd0 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.468031] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53ecd0 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.468043] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53ecd0 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.468055] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53ecd0 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.468068] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53ecd0 is same with the state(5) to be set 
00:20:11.730 [2024-07-16 01:13:27.468080] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53ecd0 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.468093] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53ecd0 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.468105] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53ecd0 is same with the state(5) to be set 00:20:11.730 [2024-07-16 01:13:27.468117] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53ecd0 is same with the state(5) to be set 00:20:11.731 [2024-07-16 01:13:27.468129] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53ecd0 is same with the state(5) to be set 00:20:11.731 [2024-07-16 01:13:27.468815] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53f1b0 is same with the state(5) to be set 00:20:11.731 [2024-07-16 01:13:27.468840] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53f1b0 is same with the state(5) to be set 00:20:11.731 [2024-07-16 01:13:27.468853] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53f1b0 is same with the state(5) to be set 00:20:11.731 [2024-07-16 01:13:27.468865] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53f1b0 is same with the state(5) to be set 00:20:11.731 [2024-07-16 01:13:27.468877] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53f1b0 is same with the state(5) to be set 00:20:11.731 [2024-07-16 01:13:27.468889] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53f1b0 is same with the state(5) to be set 00:20:11.731 [2024-07-16 01:13:27.468901] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53f1b0 is same with the state(5) to be set 00:20:11.731 [2024-07-16 01:13:27.468913] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53f1b0 is same with the state(5) to be set 00:20:11.731 [2024-07-16 01:13:27.468925] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53f1b0 is same with the state(5) to be set 00:20:11.731 [2024-07-16 01:13:27.468937] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53f1b0 is same with the state(5) to be set 00:20:11.731 [2024-07-16 01:13:27.468952] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53f1b0 is same with the state(5) to be set 00:20:11.731 [2024-07-16 01:13:27.468973] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53f1b0 is same with the state(5) to be set 00:20:11.731 [2024-07-16 01:13:27.468991] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53f1b0 is same with the state(5) to be set 00:20:11.731 [2024-07-16 01:13:27.469003] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53f1b0 is same with the state(5) to be set 00:20:11.731 [2024-07-16 01:13:27.469015] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53f1b0 is same with the state(5) to be set 00:20:11.731 [2024-07-16 01:13:27.469027] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53f1b0 is same with the state(5) to be set 00:20:11.731 [2024-07-16 01:13:27.469039] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53f1b0 is 
same with the state(5) to be set 00:20:11.731 [2024-07-16 01:13:27.469051] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53f1b0 is same with the state(5) to be set 00:20:11.731 [2024-07-16 01:13:27.469063] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53f1b0 is same with the state(5) to be set 00:20:11.731 [2024-07-16 01:13:27.469075] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53f1b0 is same with the state(5) to be set 00:20:11.731 [2024-07-16 01:13:27.469087] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53f1b0 is same with the state(5) to be set 00:20:11.731 [2024-07-16 01:13:27.469099] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53f1b0 is same with the state(5) to be set 00:20:11.731 [2024-07-16 01:13:27.469111] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53f1b0 is same with the state(5) to be set 00:20:11.731 [2024-07-16 01:13:27.469123] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53f1b0 is same with the state(5) to be set 00:20:11.731 [2024-07-16 01:13:27.469135] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53f1b0 is same with the state(5) to be set 00:20:11.731 [2024-07-16 01:13:27.469147] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53f1b0 is same with the state(5) to be set 00:20:11.731 [2024-07-16 01:13:27.469158] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53f1b0 is same with the state(5) to be set 00:20:11.731 [2024-07-16 01:13:27.469170] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53f1b0 is same with the state(5) to be set 00:20:11.731 [2024-07-16 01:13:27.469182] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53f1b0 is same with the state(5) to be set 00:20:11.731 [2024-07-16 01:13:27.469194] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53f1b0 is same with the state(5) to be set 00:20:11.731 [2024-07-16 01:13:27.469206] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53f1b0 is same with the state(5) to be set 00:20:11.731 [2024-07-16 01:13:27.469218] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53f1b0 is same with the state(5) to be set 00:20:11.731 [2024-07-16 01:13:27.469230] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53f1b0 is same with the state(5) to be set 00:20:11.731 [2024-07-16 01:13:27.469242] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53f1b0 is same with the state(5) to be set 00:20:11.731 [2024-07-16 01:13:27.469254] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53f1b0 is same with the state(5) to be set 00:20:11.731 [2024-07-16 01:13:27.469266] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53f1b0 is same with the state(5) to be set 00:20:11.731 [2024-07-16 01:13:27.469278] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53f1b0 is same with the state(5) to be set 00:20:11.731 [2024-07-16 01:13:27.469290] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53f1b0 is same with the state(5) to be set 00:20:11.731 [2024-07-16 01:13:27.469302] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x53f1b0 is same with the state(5) to be set 00:20:11.731 [2024-07-16 01:13:27.469314] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53f1b0 is same with the state(5) to be set 00:20:11.731 [2024-07-16 01:13:27.469329] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53f1b0 is same with the state(5) to be set 00:20:11.731 [2024-07-16 01:13:27.469342] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53f1b0 is same with the state(5) to be set 00:20:11.731 [2024-07-16 01:13:27.469354] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53f1b0 is same with the state(5) to be set 00:20:11.731 [2024-07-16 01:13:27.469366] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53f1b0 is same with the state(5) to be set 00:20:11.731 [2024-07-16 01:13:27.469378] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53f1b0 is same with the state(5) to be set 00:20:11.731 [2024-07-16 01:13:27.469391] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53f1b0 is same with the state(5) to be set 00:20:11.731 [2024-07-16 01:13:27.469404] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53f1b0 is same with the state(5) to be set 00:20:11.731 [2024-07-16 01:13:27.469416] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53f1b0 is same with the state(5) to be set 00:20:11.731 [2024-07-16 01:13:27.469428] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53f1b0 is same with the state(5) to be set 00:20:11.731 [2024-07-16 01:13:27.469440] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53f1b0 is same with the state(5) to be set 00:20:11.731 [2024-07-16 01:13:27.469453] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53f1b0 is same with the state(5) to be set 00:20:11.731 [2024-07-16 01:13:27.469465] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53f1b0 is same with the state(5) to be set 00:20:11.731 [2024-07-16 01:13:27.469478] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53f1b0 is same with the state(5) to be set 00:20:11.731 [2024-07-16 01:13:27.469490] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53f1b0 is same with the state(5) to be set 00:20:11.731 [2024-07-16 01:13:27.469502] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53f1b0 is same with the state(5) to be set 00:20:11.731 [2024-07-16 01:13:27.469514] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53f1b0 is same with the state(5) to be set 00:20:11.731 [2024-07-16 01:13:27.469526] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53f1b0 is same with the state(5) to be set 00:20:11.731 [2024-07-16 01:13:27.469539] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53f1b0 is same with the state(5) to be set 00:20:11.731 [2024-07-16 01:13:27.469551] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53f1b0 is same with the state(5) to be set 00:20:11.731 [2024-07-16 01:13:27.469563] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53f1b0 is same with the state(5) to be set 00:20:11.731 [2024-07-16 01:13:27.469575] 
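The flood of tcp.c:1621 errors above is one message repeated per qpair: while a dying connection is still being polled, each pass asks an already-errored TCP qpair to enter the receive state it is already in, and SPDK logs the no-op instead of silently returning. As a minimal sketch of the guard in SPDK's lib/nvmf/tcp.c (simplified, not the verbatim upstream body; the exact code and the enum value behind "state(5)" vary by SPDK revision, and 5 is presumably NVME_TCP_PDU_RECV_STATE_ERROR here):

    /* Simplified sketch of the guard that emits the repeated message above.
     * Per-state bookkeeping from the real function is elided. */
    static void
    nvmf_tcp_qpair_set_recv_state(struct spdk_nvmf_tcp_qpair *tqpair,
                                  enum nvme_tcp_pdu_recv_state state)
    {
            if (tqpair->recv_state == state) {
                    /* Asked to enter the state we are already in; during shutdown
                     * the poller keeps pushing a dying qpair back into the error
                     * state, so this one line can repeat thousands of times. */
                    SPDK_ERRLOG("The recv state of tqpair=%p is same with the state(%d) to be set\n",
                                tqpair, state);
                    return;
            }
            tqpair->recv_state = state;
            /* ... per-state handling elided ... */
    }

The nvme_tcp.c: 327 variant below is the same guard on the initiator side. The ASYNC EVENT REQUEST / ABORTED - SQ DELETION (00/08) pairs that follow are expected at teardown: outstanding admin commands complete with the NVMe generic status "Command Aborted due to SQ Deletion" (SCT 0x0 / SC 0x08) when the admin submission queue is deleted.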
00:20:11.731 [2024-07-16 01:13:27.480698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:20:11.731 [2024-07-16 01:13:27.480777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:11.731 [2024-07-16 01:13:27.480797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:20:11.731 [2024-07-16 01:13:27.480811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:11.731 [2024-07-16 01:13:27.480844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:20:11.731 [2024-07-16 01:13:27.480859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:11.731 [2024-07-16 01:13:27.480873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:20:11.731 [2024-07-16 01:13:27.480886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:11.731 [2024-07-16 01:13:27.480898] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda2240 is same with the state(5) to be set
00:20:11.731 [2024-07-16 01:13:27.480981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:20:11.731 [2024-07-16 01:13:27.481003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:11.731 [2024-07-16 01:13:27.481017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:20:11.731 [2024-07-16 01:13:27.481031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:11.731 [2024-07-16 01:13:27.481044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:20:11.731 [2024-07-16 01:13:27.481057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:11.731 [2024-07-16 01:13:27.481070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:20:11.731 [2024-07-16 01:13:27.481083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:11.731 [2024-07-16 01:13:27.481096] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd866c0 is same with the state(5) to be set
00:20:11.731 [2024-07-16 01:13:27.481142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:20:11.731 [2024-07-16 01:13:27.481163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:11.731 [2024-07-16 01:13:27.481186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:20:11.731 [2024-07-16 01:13:27.481212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:11.731 [2024-07-16 01:13:27.481230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:20:11.731 [2024-07-16 01:13:27.481243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:11.731 [2024-07-16 01:13:27.481257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:20:11.731 [2024-07-16 01:13:27.481274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:11.732 [2024-07-16 01:13:27.481286] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b330 is same with the state(5) to be set
00:20:11.732 [2024-07-16 01:13:27.481347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:20:11.732 [2024-07-16 01:13:27.481368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:11.732 [2024-07-16 01:13:27.481388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:20:11.732 [2024-07-16 01:13:27.481402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:11.732 [2024-07-16 01:13:27.481416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:20:11.732 [2024-07-16 01:13:27.481429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:11.732 [2024-07-16 01:13:27.481442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:20:11.732 [2024-07-16 01:13:27.481455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:11.732 [2024-07-16 01:13:27.481467] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c610 is same with the state(5) to be set
00:20:11.732 [2024-07-16 01:13:27.481516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:20:11.732 [2024-07-16 01:13:27.481537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:11.732 [2024-07-16 01:13:27.481551] nvme_qpair.c: 223:nvme_admin_qpair_print_command:
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:11.732 [2024-07-16 01:13:27.481564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.732 [2024-07-16 01:13:27.481578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:11.732 [2024-07-16 01:13:27.481600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.732 [2024-07-16 01:13:27.481615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:11.732 [2024-07-16 01:13:27.481628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.732 [2024-07-16 01:13:27.481640] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe29390 is same with the state(5) to be set 00:20:11.732 [2024-07-16 01:13:27.481687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:11.732 [2024-07-16 01:13:27.481707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.732 [2024-07-16 01:13:27.481722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:11.732 [2024-07-16 01:13:27.481735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.732 [2024-07-16 01:13:27.481749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:11.732 [2024-07-16 01:13:27.481766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.732 [2024-07-16 01:13:27.481779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:11.732 [2024-07-16 01:13:27.481792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.732 [2024-07-16 01:13:27.481804] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf25db0 is same with the state(5) to be set 00:20:11.732 [2024-07-16 01:13:27.481860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:11.732 [2024-07-16 01:13:27.481885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.732 [2024-07-16 01:13:27.481900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:11.732 [2024-07-16 01:13:27.481913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.732 [2024-07-16 01:13:27.481927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:11.732 [2024-07-16 01:13:27.481940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.732 [2024-07-16 01:13:27.481953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:11.732 [2024-07-16 01:13:27.481977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.732 [2024-07-16 01:13:27.481990] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe290f0 is same with the state(5) to be set 00:20:11.732 [2024-07-16 01:13:27.482036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:11.732 [2024-07-16 01:13:27.482056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.732 [2024-07-16 01:13:27.482070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:11.732 [2024-07-16 01:13:27.482084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.732 [2024-07-16 01:13:27.482098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:11.732 [2024-07-16 01:13:27.482111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.732 [2024-07-16 01:13:27.482124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:11.732 [2024-07-16 01:13:27.482137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.732 [2024-07-16 01:13:27.482149] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97010 is same with the state(5) to be set 00:20:11.732 [2024-07-16 01:13:27.482195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:11.732 [2024-07-16 01:13:27.482220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.732 [2024-07-16 01:13:27.482235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:11.732 [2024-07-16 01:13:27.482248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.732 [2024-07-16 01:13:27.482262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:11.732 [2024-07-16 01:13:27.482284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.732 [2024-07-16 01:13:27.482297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:11.732 [2024-07-16 01:13:27.482310] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.732 [2024-07-16 01:13:27.482326] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5aab0 is same with the state(5) to be set 00:20:11.732 [2024-07-16 01:13:27.482371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:11.732 [2024-07-16 01:13:27.482390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.732 [2024-07-16 01:13:27.482405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:11.732 [2024-07-16 01:13:27.482418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.732 [2024-07-16 01:13:27.482432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:11.732 [2024-07-16 01:13:27.482444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.732 [2024-07-16 01:13:27.482458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:11.732 [2024-07-16 01:13:27.482471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.732 [2024-07-16 01:13:27.482483] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf264c0 is same with the state(5) to be set 00:20:11.732 [2024-07-16 01:13:27.482655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.732 [2024-07-16 01:13:27.482686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.732 [2024-07-16 01:13:27.482713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.732 [2024-07-16 01:13:27.482728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.732 [2024-07-16 01:13:27.482749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.732 [2024-07-16 01:13:27.482762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.732 [2024-07-16 01:13:27.482777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.732 [2024-07-16 01:13:27.482791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.732 [2024-07-16 01:13:27.482806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.732 [2024-07-16 01:13:27.482819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:20:11.732 [2024-07-16 01:13:27.482834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.732 [2024-07-16 01:13:27.482848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.732 [2024-07-16 01:13:27.482863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.732 [2024-07-16 01:13:27.482876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.732 [2024-07-16 01:13:27.482891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.732 [2024-07-16 01:13:27.482905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.732 [2024-07-16 01:13:27.482926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.732 [2024-07-16 01:13:27.482940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.732 [2024-07-16 01:13:27.482964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.732 [2024-07-16 01:13:27.482980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.732 [2024-07-16 01:13:27.482996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.732 [2024-07-16 01:13:27.483009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.732 [2024-07-16 01:13:27.483024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.733 [2024-07-16 01:13:27.483038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.733 [2024-07-16 01:13:27.483052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.733 [2024-07-16 01:13:27.483066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.733 [2024-07-16 01:13:27.483081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.733 [2024-07-16 01:13:27.483094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.733 [2024-07-16 01:13:27.483108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.733 [2024-07-16 01:13:27.483121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:20:11.733 [2024-07-16 01:13:27.483137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.733 [2024-07-16 01:13:27.483150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.733 [2024-07-16 01:13:27.483165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.733 [2024-07-16 01:13:27.483178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.733 [2024-07-16 01:13:27.483194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.733 [2024-07-16 01:13:27.483207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.733 [2024-07-16 01:13:27.483222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.733 [2024-07-16 01:13:27.483235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.733 [2024-07-16 01:13:27.483250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.733 [2024-07-16 01:13:27.483273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.733 [2024-07-16 01:13:27.483288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.733 [2024-07-16 01:13:27.483306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.733 [2024-07-16 01:13:27.483322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.733 [2024-07-16 01:13:27.483335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.733 [2024-07-16 01:13:27.483350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.733 [2024-07-16 01:13:27.483363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.733 [2024-07-16 01:13:27.483389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.733 [2024-07-16 01:13:27.483403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.733 [2024-07-16 01:13:27.483419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.733 [2024-07-16 01:13:27.483432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:11.733 [2024-07-16 01:13:27.483458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.733 [2024-07-16 01:13:27.483472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.733 [2024-07-16 01:13:27.483487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.733 [2024-07-16 01:13:27.483500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.733 [2024-07-16 01:13:27.483516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.733 [2024-07-16 01:13:27.483529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.733 [2024-07-16 01:13:27.483544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.733 [2024-07-16 01:13:27.483557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.733 [2024-07-16 01:13:27.483573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.733 [2024-07-16 01:13:27.483586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.733 [2024-07-16 01:13:27.483601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.733 [2024-07-16 01:13:27.483614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.733 [2024-07-16 01:13:27.483629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.733 [2024-07-16 01:13:27.483642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.733 [2024-07-16 01:13:27.483658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.733 [2024-07-16 01:13:27.483671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.733 [2024-07-16 01:13:27.483691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.733 [2024-07-16 01:13:27.483704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.733 [2024-07-16 01:13:27.483719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.733 [2024-07-16 01:13:27.483733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.733 
[2024-07-16 01:13:27.483748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.733 [2024-07-16 01:13:27.483761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.733 [2024-07-16 01:13:27.483776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.733 [2024-07-16 01:13:27.483789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.733 [2024-07-16 01:13:27.483804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.733 [2024-07-16 01:13:27.483818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.733 [2024-07-16 01:13:27.483833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.733 [2024-07-16 01:13:27.483847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.733 [2024-07-16 01:13:27.483862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.733 [2024-07-16 01:13:27.483876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.733 [2024-07-16 01:13:27.483891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.733 [2024-07-16 01:13:27.483905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.733 [2024-07-16 01:13:27.483920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.733 [2024-07-16 01:13:27.483933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.733 [2024-07-16 01:13:27.483952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.733 [2024-07-16 01:13:27.483974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.733 [2024-07-16 01:13:27.483989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.733 [2024-07-16 01:13:27.484003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.733 [2024-07-16 01:13:27.484018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.733 [2024-07-16 01:13:27.484032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.733 [2024-07-16 
01:13:27.484047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.733 [2024-07-16 01:13:27.484064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.733 [2024-07-16 01:13:27.484080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.733 [2024-07-16 01:13:27.484093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.733 [2024-07-16 01:13:27.484109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.733 [2024-07-16 01:13:27.484122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.733 [2024-07-16 01:13:27.484137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.733 [2024-07-16 01:13:27.484150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.733 [2024-07-16 01:13:27.484166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.733 [2024-07-16 01:13:27.484179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.733 [2024-07-16 01:13:27.484194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.733 [2024-07-16 01:13:27.484207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.733 [2024-07-16 01:13:27.484222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.733 [2024-07-16 01:13:27.484235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.733 [2024-07-16 01:13:27.484259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.733 [2024-07-16 01:13:27.484272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.733 [2024-07-16 01:13:27.484287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.733 [2024-07-16 01:13:27.484301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.733 [2024-07-16 01:13:27.484316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.734 [2024-07-16 01:13:27.484329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.734 [2024-07-16 
01:13:27.484351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.734 [2024-07-16 01:13:27.484365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.734 [2024-07-16 01:13:27.484380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.734 [2024-07-16 01:13:27.484393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.734 [2024-07-16 01:13:27.484415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.734 [2024-07-16 01:13:27.484432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.734 [2024-07-16 01:13:27.484449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.734 [2024-07-16 01:13:27.484462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.734 [2024-07-16 01:13:27.484478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.734 [2024-07-16 01:13:27.484492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.734 [2024-07-16 01:13:27.484507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.734 [2024-07-16 01:13:27.484521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.734 [2024-07-16 01:13:27.484536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.734 [2024-07-16 01:13:27.484550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.734 [2024-07-16 01:13:27.484565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.734 [2024-07-16 01:13:27.484578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.734 [2024-07-16 01:13:27.484594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.734 [2024-07-16 01:13:27.484607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.734 [2024-07-16 01:13:27.484704] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xfb7070 was disconnected and freed. reset controller. 
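00:20:11.734 [Editor's aside, not part of the captured console output] The long run of NOTICE pairs above is SPDK draining a queue during controller reset: each queued WRITE is printed by nvme_io_qpair_print_command() and then completed with "ABORTED - SQ DELETION (00/08)". Per the NVMe specification, the pair in parentheses is status-code-type/status-code: SCT 0x0 is Generic Command Status and SC 0x08 is "Command Aborted due to SQ Deletion", i.e. every in-flight command on the submission queue is failed back when the queue is torn down, after which the qpair is disconnected and freed and the controller is reset. The tiny standalone C sketch below (a hypothetical helper, not an SPDK API) just decodes that pair to make the log easier to read; the two cases shown are the only ones it claims to know.

    /*
     * Hypothetical decoder for the "(SCT/SC)" pair that SPDK's completion
     * printer emits, e.g. "(00/08)" throughout this log.  Facts encoded here
     * come from the NVMe spec: SCT 0x0 = Generic Command Status, and within
     * that type SC 0x08 = Command Aborted due to SQ Deletion.
     */
    #include <stdio.h>

    static const char *decode_nvme_status(unsigned int sct, unsigned int sc)
    {
        if (sct == 0x0 && sc == 0x00)
            return "SUCCESS";
        if (sct == 0x0 && sc == 0x08)
            return "ABORTED - SQ DELETION";
        return "other status (see NVMe spec, Figure: Status Code values)";
    }

    int main(void)
    {
        /* The pair printed for every aborted WRITE above. */
        printf("(00/08) => %s\n", decode_nvme_status(0x0, 0x08));
        return 0;
    }

(The later "connect() failed, errno = 111" lines are Linux ECONNREFUSED: the host retries the TCP connection to 10.0.0.2:4420 while the target side of the reset is still coming back up.)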
00:20:11.734 [2024-07-16 01:13:27.484778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.734 [2024-07-16 01:13:27.484797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.734 [2024-07-16 01:13:27.484817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.734 [2024-07-16 01:13:27.484835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.734 [2024-07-16 01:13:27.484851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.734 [2024-07-16 01:13:27.484864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.734 [2024-07-16 01:13:27.484880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.734 [2024-07-16 01:13:27.484894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.734 [2024-07-16 01:13:27.484910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.734 [2024-07-16 01:13:27.484924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.734 [2024-07-16 01:13:27.484950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.734 [2024-07-16 01:13:27.484972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.734 [2024-07-16 01:13:27.484993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.734 [2024-07-16 01:13:27.485008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.734 [2024-07-16 01:13:27.485029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.734 [2024-07-16 01:13:27.485043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.734 [2024-07-16 01:13:27.485059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.734 [2024-07-16 01:13:27.485072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.734 [2024-07-16 01:13:27.485087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.734 [2024-07-16 01:13:27.485100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.734 [2024-07-16 
01:13:27.485116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.734 [2024-07-16 01:13:27.485129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.734 [2024-07-16 01:13:27.485145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.734 [2024-07-16 01:13:27.485158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.734 [2024-07-16 01:13:27.485173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.734 [2024-07-16 01:13:27.485187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.734 [2024-07-16 01:13:27.485202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.734 [2024-07-16 01:13:27.485216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.734 [2024-07-16 01:13:27.485231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.734 [2024-07-16 01:13:27.485249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.734 [2024-07-16 01:13:27.485264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.734 [2024-07-16 01:13:27.485277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.734 [2024-07-16 01:13:27.485292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.734 [2024-07-16 01:13:27.485314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.734 [2024-07-16 01:13:27.485329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.734 [2024-07-16 01:13:27.485342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.734 [2024-07-16 01:13:27.485358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.734 [2024-07-16 01:13:27.485375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.734 [2024-07-16 01:13:27.485392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.734 [2024-07-16 01:13:27.485405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.734 [2024-07-16 
01:13:27.485421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.734 [2024-07-16 01:13:27.485434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.734 [2024-07-16 01:13:27.485450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.734 [2024-07-16 01:13:27.485463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.734 [2024-07-16 01:13:27.485478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.734 [2024-07-16 01:13:27.485491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.734 [2024-07-16 01:13:27.485512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.734 [2024-07-16 01:13:27.485526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.734 [2024-07-16 01:13:27.485541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.734 [2024-07-16 01:13:27.485555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.734 [2024-07-16 01:13:27.485570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.734 [2024-07-16 01:13:27.485584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.734 [2024-07-16 01:13:27.485599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.735 [2024-07-16 01:13:27.485613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.735 [2024-07-16 01:13:27.485628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.735 [2024-07-16 01:13:27.485641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.735 [2024-07-16 01:13:27.485656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.735 [2024-07-16 01:13:27.485670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.735 [2024-07-16 01:13:27.485685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.735 [2024-07-16 01:13:27.485709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.735 [2024-07-16 
01:13:27.485724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.735 [2024-07-16 01:13:27.485737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.735 [2024-07-16 01:13:27.485756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.735 [2024-07-16 01:13:27.485774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.735 [2024-07-16 01:13:27.485789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.735 [2024-07-16 01:13:27.485802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.735 [2024-07-16 01:13:27.485818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.735 [2024-07-16 01:13:27.485831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.735 [2024-07-16 01:13:27.485846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.735 [2024-07-16 01:13:27.485859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.735 [2024-07-16 01:13:27.485874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.735 [2024-07-16 01:13:27.485887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.735 [2024-07-16 01:13:27.485903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.735 [2024-07-16 01:13:27.485916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.735 [2024-07-16 01:13:27.485931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.735 [2024-07-16 01:13:27.485952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.735 [2024-07-16 01:13:27.485974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.735 [2024-07-16 01:13:27.485989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.735 [2024-07-16 01:13:27.486010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.735 [2024-07-16 01:13:27.486024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.735 [2024-07-16 
01:13:27.486039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.735 [2024-07-16 01:13:27.486052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.735 [2024-07-16 01:13:27.486068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.735 [2024-07-16 01:13:27.486081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.735 [2024-07-16 01:13:27.486096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.735 [2024-07-16 01:13:27.486108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.735 [2024-07-16 01:13:27.486123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.735 [2024-07-16 01:13:27.486140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.735 [2024-07-16 01:13:27.486156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.735 [2024-07-16 01:13:27.486169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.735 [2024-07-16 01:13:27.486184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.735 [2024-07-16 01:13:27.486198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.735 [2024-07-16 01:13:27.486213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.735 [2024-07-16 01:13:27.486227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.735 [2024-07-16 01:13:27.486242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.735 [2024-07-16 01:13:27.486259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.735 [2024-07-16 01:13:27.486274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.735 [2024-07-16 01:13:27.486288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.735 [2024-07-16 01:13:27.486303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.735 [2024-07-16 01:13:27.486324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.735 [2024-07-16 
01:13:27.486339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.735 [2024-07-16 01:13:27.486352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.735 [2024-07-16 01:13:27.486367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.735 [2024-07-16 01:13:27.486380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.735 [2024-07-16 01:13:27.486396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.735 [2024-07-16 01:13:27.486409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.735 [2024-07-16 01:13:27.486424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.735 [2024-07-16 01:13:27.486438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.735 [2024-07-16 01:13:27.486454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.735 [2024-07-16 01:13:27.486467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.735 [2024-07-16 01:13:27.486483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.735 [2024-07-16 01:13:27.486510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.735 [2024-07-16 01:13:27.486526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.735 [2024-07-16 01:13:27.486539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.735 [2024-07-16 01:13:27.486554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.735 [2024-07-16 01:13:27.486568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.735 [2024-07-16 01:13:27.486582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.735 [2024-07-16 01:13:27.486595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.735 [2024-07-16 01:13:27.486611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.735 [2024-07-16 01:13:27.486624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.735 [2024-07-16 
01:13:27.486639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.735 [2024-07-16 01:13:27.486652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.735 [2024-07-16 01:13:27.486667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.735 [2024-07-16 01:13:27.486680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.735 [2024-07-16 01:13:27.486696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.735 [2024-07-16 01:13:27.486709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.735 [2024-07-16 01:13:27.486724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.735 [2024-07-16 01:13:27.486737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.735 [2024-07-16 01:13:27.486818] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xfb8540 was disconnected and freed. reset controller. 00:20:11.735 [2024-07-16 01:13:27.490367] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:20:11.735 [2024-07-16 01:13:27.490410] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:20:11.735 [2024-07-16 01:13:27.490439] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd866c0 (9): Bad file descriptor 00:20:11.735 [2024-07-16 01:13:27.490462] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf264c0 (9): Bad file descriptor 00:20:11.735 [2024-07-16 01:13:27.491095] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xda2240 (9): Bad file descriptor 00:20:11.735 [2024-07-16 01:13:27.491139] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b330 (9): Bad file descriptor 00:20:11.735 [2024-07-16 01:13:27.491170] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x85c610 (9): Bad file descriptor 00:20:11.735 [2024-07-16 01:13:27.491201] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe29390 (9): Bad file descriptor 00:20:11.735 [2024-07-16 01:13:27.491235] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf25db0 (9): Bad file descriptor 00:20:11.735 [2024-07-16 01:13:27.491281] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe290f0 (9): Bad file descriptor 00:20:11.736 [2024-07-16 01:13:27.491320] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd97010 (9): Bad file descriptor 00:20:11.736 [2024-07-16 01:13:27.491350] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd5aab0 (9): Bad file descriptor 00:20:11.736 [2024-07-16 01:13:27.492072] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 
00:20:11.736 [2024-07-16 01:13:27.492072 .. 01:13:27.492574] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 (7 occurrences condensed)
00:20:11.736 [2024-07-16 01:13:27.492772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:11.736 [2024-07-16 01:13:27.492803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf264c0 with addr=10.0.0.2, port=4420
00:20:11.736 [2024-07-16 01:13:27.492821] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf264c0 is same with the state(5) to be set
00:20:11.736 [2024-07-16 01:13:27.492920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:11.736 [2024-07-16 01:13:27.492945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd866c0 with addr=10.0.0.2, port=4420
00:20:11.736 [2024-07-16 01:13:27.492972] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd866c0 is same with the state(5) to be set
00:20:11.736 [2024-07-16 01:13:27.493053 .. 01:13:27.494993] nvme_qpair.c: *NOTICE*: WRITE sqid:1 cid:57..63 lba:31872..32640, READ sqid:1 cid:4..55 lba:25088..31616, WRITE sqid:1 cid:0..3 lba:32768..33152, READ sqid:1 cid:56 lba:31744 (all nsid:1 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0); each command paired with a completion: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (64 command/completion pairs condensed)
00:20:11.737 [2024-07-16 01:13:27.495008] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf89e50 is same with the state(5) to be set
00:20:11.737 [2024-07-16 01:13:27.495088] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf89e50 was disconnected and freed. reset controller.
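
"connect() failed, errno = 111" from posix_sock_create is plain ECONNREFUSED: while the target side of the test is down for the reset, nothing is listening on 10.0.0.2:4420 (the NVMe/TCP port used throughout this run), so each reconnect attempt is refused at the TCP level. A self-contained repro sketch under that assumption, with the address and port taken from the log:

    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Repro sketch: while the target is restarting, no listener exists on
     * 10.0.0.2:4420, so connect() fails with errno 111 (ECONNREFUSED), the
     * same errno SPDK's posix_sock_create reports above. */
    int main(void)
    {
        struct sockaddr_in addr = {0};
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);            /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }
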
00:20:11.737 [2024-07-16 01:13:27.495234] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf264c0 (9): Bad file descriptor
00:20:11.737 [2024-07-16 01:13:27.495263] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd866c0 (9): Bad file descriptor
00:20:11.737 [2024-07-16 01:13:27.496510] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:11.737 [2024-07-16 01:13:27.496552] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:20:11.737 [2024-07-16 01:13:27.496570] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:20:11.737 [2024-07-16 01:13:27.496587] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:20:11.737 [2024-07-16 01:13:27.496607] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:20:11.737 [2024-07-16 01:13:27.496620] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:20:11.737 [2024-07-16 01:13:27.496633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:20:11.737 [2024-07-16 01:13:27.496713] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:11.737 [2024-07-16 01:13:27.496734] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:11.737 [2024-07-16 01:13:27.496864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:11.737 [2024-07-16 01:13:27.496890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd5aab0 with addr=10.0.0.2, port=4420
00:20:11.737 [2024-07-16 01:13:27.496906] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5aab0 is same with the state(5) to be set
00:20:11.737 [2024-07-16 01:13:27.497267] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd5aab0 (9): Bad file descriptor
00:20:11.737 [2024-07-16 01:13:27.497339] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:11.737 [2024-07-16 01:13:27.497359] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:11.737 [2024-07-16 01:13:27.497373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:11.737 [2024-07-16 01:13:27.497438] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
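
The spdk_nvme_ctrlr_reconnect_poll_async errors above come from the asynchronous reset path: the controller is disconnected, a reconnect is started, and the poller either finishes reinitialization or moves the controller into the failed state (nvme_ctrlr_fail), after which bdev_nvme reports "Resetting controller failed." A minimal sketch of that sequence as a standalone call chain; the busy-wait loop is a simplification (bdev_nvme drives the poll from its reactor), and the return-value notes in the comments are my reading of the spdk/nvme.h API, not taken from this log:

    #include <errno.h>
    #include "spdk/nvme.h"

    /* Sketch of the disconnect/reconnect cycle behind the "resetting
     * controller" and "controller reinitialization failed" lines above. */
    static int
    reset_ctrlr(struct spdk_nvme_ctrlr *ctrlr)
    {
        int rc = spdk_nvme_ctrlr_disconnect(ctrlr);

        if (rc != 0) {
            return rc;                /* e.g. a reset is already in flight */
        }
        spdk_nvme_ctrlr_reconnect_async(ctrlr);
        do {
            /* -EAGAIN while the reconnect is still in progress; any other
             * non-zero value corresponds to the failure logged above. */
            rc = spdk_nvme_ctrlr_reconnect_poll_async(ctrlr);
        } while (rc == -EAGAIN);
        return rc;
    }
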
00:20:11.737 [2024-07-16 01:13:27.501298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.737 [2024-07-16 01:13:27.501330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.737 [2024-07-16 01:13:27.501359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.737 [2024-07-16 01:13:27.501374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.737 [2024-07-16 01:13:27.501391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.737 [2024-07-16 01:13:27.501404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.737 [2024-07-16 01:13:27.501420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.737 [2024-07-16 01:13:27.501433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.737 [2024-07-16 01:13:27.501448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.737 [2024-07-16 01:13:27.501461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.737 [2024-07-16 01:13:27.501477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.737 [2024-07-16 01:13:27.501490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.737 [2024-07-16 01:13:27.501505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.738 [2024-07-16 01:13:27.501518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.738 [2024-07-16 01:13:27.501534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.738 [2024-07-16 01:13:27.501547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.738 [2024-07-16 01:13:27.501562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.738 [2024-07-16 01:13:27.501582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.738 [2024-07-16 01:13:27.501598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.738 [2024-07-16 01:13:27.501611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.738 [2024-07-16 
01:13:27.501626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.738 [2024-07-16 01:13:27.501647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.738 [2024-07-16 01:13:27.501662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.738 [2024-07-16 01:13:27.501674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.738 [2024-07-16 01:13:27.501689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.738 [2024-07-16 01:13:27.501702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.738 [2024-07-16 01:13:27.501717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.738 [2024-07-16 01:13:27.501730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.738 [2024-07-16 01:13:27.501745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.738 [2024-07-16 01:13:27.501758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.738 [2024-07-16 01:13:27.501773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.738 [2024-07-16 01:13:27.501786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.738 [2024-07-16 01:13:27.501801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.738 [2024-07-16 01:13:27.501814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.738 [2024-07-16 01:13:27.501829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.738 [2024-07-16 01:13:27.501842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.738 [2024-07-16 01:13:27.501857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.738 [2024-07-16 01:13:27.501870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.738 [2024-07-16 01:13:27.501885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.738 [2024-07-16 01:13:27.501898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.738 [2024-07-16 01:13:27.501913] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.738 [2024-07-16 01:13:27.501926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.738 [2024-07-16 01:13:27.501953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.738 [2024-07-16 01:13:27.501980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.738 [2024-07-16 01:13:27.501996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.738 [2024-07-16 01:13:27.502010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.738 [2024-07-16 01:13:27.502025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.738 [2024-07-16 01:13:27.502038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.738 [2024-07-16 01:13:27.502054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.738 [2024-07-16 01:13:27.502067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.738 [2024-07-16 01:13:27.502082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.738 [2024-07-16 01:13:27.502095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.738 [2024-07-16 01:13:27.502110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.738 [2024-07-16 01:13:27.502124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.738 [2024-07-16 01:13:27.502139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.738 [2024-07-16 01:13:27.502152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.738 [2024-07-16 01:13:27.502167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.738 [2024-07-16 01:13:27.502180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.738 [2024-07-16 01:13:27.502195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.738 [2024-07-16 01:13:27.502209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.738 [2024-07-16 01:13:27.502224] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.738 [2024-07-16 01:13:27.502237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.738 [2024-07-16 01:13:27.502263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.738 [2024-07-16 01:13:27.502276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.738 [2024-07-16 01:13:27.502291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.738 [2024-07-16 01:13:27.502304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.738 [2024-07-16 01:13:27.502320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.738 [2024-07-16 01:13:27.502333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.738 [2024-07-16 01:13:27.502352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.738 [2024-07-16 01:13:27.502366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.738 [2024-07-16 01:13:27.502392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.738 [2024-07-16 01:13:27.502406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.738 [2024-07-16 01:13:27.502422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.738 [2024-07-16 01:13:27.502435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.738 [2024-07-16 01:13:27.502451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.738 [2024-07-16 01:13:27.502468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.738 [2024-07-16 01:13:27.502484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.738 [2024-07-16 01:13:27.502497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.738 [2024-07-16 01:13:27.502512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.738 [2024-07-16 01:13:27.502526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.738 [2024-07-16 01:13:27.502541] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.738 [2024-07-16 01:13:27.502554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.738 [2024-07-16 01:13:27.502569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.739 [2024-07-16 01:13:27.502583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.739 [2024-07-16 01:13:27.502598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.739 [2024-07-16 01:13:27.502611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.739 [2024-07-16 01:13:27.502626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.739 [2024-07-16 01:13:27.502639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.739 [2024-07-16 01:13:27.502655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.739 [2024-07-16 01:13:27.502668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.739 [2024-07-16 01:13:27.502683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.739 [2024-07-16 01:13:27.502696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.739 [2024-07-16 01:13:27.502712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.739 [2024-07-16 01:13:27.502728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.739 [2024-07-16 01:13:27.502744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.739 [2024-07-16 01:13:27.502757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.739 [2024-07-16 01:13:27.502772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.739 [2024-07-16 01:13:27.502786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.739 [2024-07-16 01:13:27.502801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.739 [2024-07-16 01:13:27.502814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.739 [2024-07-16 01:13:27.502829] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.739 [2024-07-16 01:13:27.502843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.739 [2024-07-16 01:13:27.502859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.739 [2024-07-16 01:13:27.502873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.739 [2024-07-16 01:13:27.502888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.739 [2024-07-16 01:13:27.502901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.739 [2024-07-16 01:13:27.502917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.739 [2024-07-16 01:13:27.502930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.739 [2024-07-16 01:13:27.502961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.739 [2024-07-16 01:13:27.502976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.739 [2024-07-16 01:13:27.502992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.739 [2024-07-16 01:13:27.503005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.739 [2024-07-16 01:13:27.503021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.739 [2024-07-16 01:13:27.503034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.739 [2024-07-16 01:13:27.503050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.739 [2024-07-16 01:13:27.503063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.739 [2024-07-16 01:13:27.503078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.739 [2024-07-16 01:13:27.503091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.739 [2024-07-16 01:13:27.503110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.739 [2024-07-16 01:13:27.503124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.739 [2024-07-16 01:13:27.503140] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.739 [2024-07-16 01:13:27.503153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.739 [2024-07-16 01:13:27.503168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.739 [2024-07-16 01:13:27.503182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.739 [2024-07-16 01:13:27.503197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.739 [2024-07-16 01:13:27.503210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.739 [2024-07-16 01:13:27.503225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.739 [2024-07-16 01:13:27.503249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.739 [2024-07-16 01:13:27.503263] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb9aa0 is same with the state(5) to be set 00:20:11.739 [2024-07-16 01:13:27.504551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.739 [2024-07-16 01:13:27.504574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.739 [2024-07-16 01:13:27.504594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.739 [2024-07-16 01:13:27.504609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.739 [2024-07-16 01:13:27.504625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.739 [2024-07-16 01:13:27.504638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.739 [2024-07-16 01:13:27.504654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.739 [2024-07-16 01:13:27.504667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.739 [2024-07-16 01:13:27.504692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.739 [2024-07-16 01:13:27.504705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.739 [2024-07-16 01:13:27.504720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.739 [2024-07-16 01:13:27.504733] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:11.739 [2024-07-16 01:13:27.504757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:11.739 [2024-07-16 01:13:27.504770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ / ABORTED - SQ DELETION (00/08) pair repeats verbatim for cid:7 through cid:63 (lba 17280 through 24448, stepping by 128), 2024-07-16 01:13:27.504790 through 01:13:27.506486 ...]
00:20:11.741 [2024-07-16 01:13:27.506500] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbaf70 is same with the state(5) to be set
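The (00/08) tuple in each completion is SPDK's (SCT/SC) status print: Status Code Type 0x0 (generic command status) with Status Code 0x08 (Command Aborted due to SQ Deletion), the expected completion when an I/O submission queue is torn down with reads still in flight. The dump above, and the further dumps summarized below, are easiest to audit with a quick summary pass over the saved console output. A minimal sketch, assuming the log has been saved to a hypothetical file console.log (the grep -o pass keeps the counting correct even on wrapped lines that hold several messages):

    # Count aborted completions per torn-down TCP qpair. console.log is a
    # hypothetical stand-in for this job's saved console output.
    grep -oE 'ABORTED - SQ DELETION|recv state of tqpair=0x[0-9a-f]+' console.log |
    awk '/ABORTED/ { n++ }
         /tqpair=/ {
           # strip everything up to the qpair address, then report the run
           sub(/.*tqpair=/, "")
           printf "tqpair=%s: %d aborted completions\n", $0, n
           n = 0
         }'

On this capture the sketch would report one line per recv-state error (tqpair=0xfbaf70, 0xfbc480, 0xd4f6b0), each preceded by roughly 64 aborted READ completions.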
00:20:11.740 [2024-07-16 01:13:27.505971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.740 [2024-07-16 01:13:27.505986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.740 [2024-07-16 01:13:27.506001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.740 [2024-07-16 01:13:27.506014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.740 [2024-07-16 01:13:27.506030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.740 [2024-07-16 01:13:27.506043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.740 [2024-07-16 01:13:27.506059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.740 [2024-07-16 01:13:27.506072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.740 [2024-07-16 01:13:27.506087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.740 [2024-07-16 01:13:27.506100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.740 [2024-07-16 01:13:27.506116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.740 [2024-07-16 01:13:27.506128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.740 [2024-07-16 01:13:27.506144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.740 [2024-07-16 01:13:27.506157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.740 [2024-07-16 01:13:27.506172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.740 [2024-07-16 01:13:27.506185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.740 [2024-07-16 01:13:27.506200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.740 [2024-07-16 01:13:27.506214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.740 [2024-07-16 01:13:27.506230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.740 [2024-07-16 01:13:27.506247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.740 [2024-07-16 
01:13:27.506262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.740 [2024-07-16 01:13:27.506276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.740 [2024-07-16 01:13:27.506295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.740 [2024-07-16 01:13:27.506312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.740 [2024-07-16 01:13:27.506327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.740 [2024-07-16 01:13:27.506340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.740 [2024-07-16 01:13:27.506355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.740 [2024-07-16 01:13:27.506368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.740 [2024-07-16 01:13:27.506383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.740 [2024-07-16 01:13:27.506401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.740 [2024-07-16 01:13:27.506416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.740 [2024-07-16 01:13:27.506430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.740 [2024-07-16 01:13:27.506445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.741 [2024-07-16 01:13:27.506458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.741 [2024-07-16 01:13:27.506473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.741 [2024-07-16 01:13:27.506486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.741 [2024-07-16 01:13:27.506500] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbaf70 is same with the state(5) to be set 00:20:11.741 [2024-07-16 01:13:27.507791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.741 [2024-07-16 01:13:27.507815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.741 [2024-07-16 01:13:27.507835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.741 [2024-07-16 01:13:27.507850] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.741 [2024-07-16 01:13:27.507866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.741 [2024-07-16 01:13:27.507879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.741 [2024-07-16 01:13:27.507895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.741 [2024-07-16 01:13:27.507908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.741 [2024-07-16 01:13:27.507923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.741 [2024-07-16 01:13:27.507937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.741 [2024-07-16 01:13:27.507971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.741 [2024-07-16 01:13:27.507987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.741 [2024-07-16 01:13:27.508003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.741 [2024-07-16 01:13:27.508016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.741 [2024-07-16 01:13:27.508032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.741 [2024-07-16 01:13:27.508045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.741 [2024-07-16 01:13:27.508061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.741 [2024-07-16 01:13:27.508075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.741 [2024-07-16 01:13:27.508090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.741 [2024-07-16 01:13:27.508103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.741 [2024-07-16 01:13:27.508118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.741 [2024-07-16 01:13:27.508131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.741 [2024-07-16 01:13:27.508146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.741 [2024-07-16 01:13:27.508159] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.741 [2024-07-16 01:13:27.508175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.741 [2024-07-16 01:13:27.508188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.741 [2024-07-16 01:13:27.508204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.741 [2024-07-16 01:13:27.508217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.741 [2024-07-16 01:13:27.508232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.741 [2024-07-16 01:13:27.508247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.741 [2024-07-16 01:13:27.508269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.741 [2024-07-16 01:13:27.508282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.741 [2024-07-16 01:13:27.508297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.741 [2024-07-16 01:13:27.508310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.741 [2024-07-16 01:13:27.508325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.741 [2024-07-16 01:13:27.508342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.741 [2024-07-16 01:13:27.508358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.741 [2024-07-16 01:13:27.508371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.741 [2024-07-16 01:13:27.508386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.741 [2024-07-16 01:13:27.508399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.741 [2024-07-16 01:13:27.508419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.741 [2024-07-16 01:13:27.508432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.741 [2024-07-16 01:13:27.508447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.741 [2024-07-16 01:13:27.508460] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.741 [2024-07-16 01:13:27.508476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.741 [2024-07-16 01:13:27.508492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.741 [2024-07-16 01:13:27.508507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.741 [2024-07-16 01:13:27.508520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.741 [2024-07-16 01:13:27.508536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.741 [2024-07-16 01:13:27.508549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.741 [2024-07-16 01:13:27.508565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.741 [2024-07-16 01:13:27.508579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.741 [2024-07-16 01:13:27.508594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.741 [2024-07-16 01:13:27.508607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.741 [2024-07-16 01:13:27.508622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.741 [2024-07-16 01:13:27.508635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.741 [2024-07-16 01:13:27.508650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.741 [2024-07-16 01:13:27.508663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.741 [2024-07-16 01:13:27.508679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.741 [2024-07-16 01:13:27.508693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.741 [2024-07-16 01:13:27.508711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.741 [2024-07-16 01:13:27.508725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.741 [2024-07-16 01:13:27.508741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.741 [2024-07-16 01:13:27.508754] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.741 [2024-07-16 01:13:27.508769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.741 [2024-07-16 01:13:27.508782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.741 [2024-07-16 01:13:27.508797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.741 [2024-07-16 01:13:27.508810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.741 [2024-07-16 01:13:27.508825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.741 [2024-07-16 01:13:27.508838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.741 [2024-07-16 01:13:27.508853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.741 [2024-07-16 01:13:27.508867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.741 [2024-07-16 01:13:27.508882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.741 [2024-07-16 01:13:27.508895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.741 [2024-07-16 01:13:27.508911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.741 [2024-07-16 01:13:27.508924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.741 [2024-07-16 01:13:27.508949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.741 [2024-07-16 01:13:27.508970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.741 [2024-07-16 01:13:27.508986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.741 [2024-07-16 01:13:27.508999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.741 [2024-07-16 01:13:27.509015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.741 [2024-07-16 01:13:27.509029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.742 [2024-07-16 01:13:27.509044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.742 [2024-07-16 01:13:27.509057] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.742 [2024-07-16 01:13:27.509072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.742 [2024-07-16 01:13:27.509089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.742 [2024-07-16 01:13:27.509105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.742 [2024-07-16 01:13:27.509119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.742 [2024-07-16 01:13:27.509135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.742 [2024-07-16 01:13:27.509148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.742 [2024-07-16 01:13:27.509163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.742 [2024-07-16 01:13:27.509176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.742 [2024-07-16 01:13:27.509191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.742 [2024-07-16 01:13:27.509204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.742 [2024-07-16 01:13:27.509220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.742 [2024-07-16 01:13:27.509233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.742 [2024-07-16 01:13:27.509258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.742 [2024-07-16 01:13:27.509272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.742 [2024-07-16 01:13:27.509287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.742 [2024-07-16 01:13:27.509300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.742 [2024-07-16 01:13:27.509315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.742 [2024-07-16 01:13:27.509329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.742 [2024-07-16 01:13:27.509344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.742 [2024-07-16 01:13:27.509358] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.742 [2024-07-16 01:13:27.509373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.742 [2024-07-16 01:13:27.509386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.742 [2024-07-16 01:13:27.509401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.742 [2024-07-16 01:13:27.509421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.742 [2024-07-16 01:13:27.509437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.742 [2024-07-16 01:13:27.509450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.742 [2024-07-16 01:13:27.509469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.742 [2024-07-16 01:13:27.509482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.742 [2024-07-16 01:13:27.509498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.742 [2024-07-16 01:13:27.509512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.742 [2024-07-16 01:13:27.509527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.742 [2024-07-16 01:13:27.509541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.742 [2024-07-16 01:13:27.509556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.742 [2024-07-16 01:13:27.509569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.742 [2024-07-16 01:13:27.509584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.742 [2024-07-16 01:13:27.509597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.742 [2024-07-16 01:13:27.509613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.742 [2024-07-16 01:13:27.509626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.742 [2024-07-16 01:13:27.509641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.742 [2024-07-16 01:13:27.509655] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.742 [2024-07-16 01:13:27.509670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.742 [2024-07-16 01:13:27.509683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.742 [2024-07-16 01:13:27.509698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.742 [2024-07-16 01:13:27.509711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.742 [2024-07-16 01:13:27.509726] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbc480 is same with the state(5) to be set 00:20:11.742 [2024-07-16 01:13:27.511044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.742 [2024-07-16 01:13:27.511067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.742 [2024-07-16 01:13:27.511088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.742 [2024-07-16 01:13:27.511103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.742 [2024-07-16 01:13:27.511118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.742 [2024-07-16 01:13:27.511132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.742 [2024-07-16 01:13:27.511147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.742 [2024-07-16 01:13:27.511165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.742 [2024-07-16 01:13:27.511182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.742 [2024-07-16 01:13:27.511195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.742 [2024-07-16 01:13:27.511210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.742 [2024-07-16 01:13:27.511224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.742 [2024-07-16 01:13:27.511239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.742 [2024-07-16 01:13:27.511262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.742 [2024-07-16 01:13:27.511277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.742 [2024-07-16 01:13:27.511291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.742 [2024-07-16 01:13:27.511306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.742 [2024-07-16 01:13:27.511319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.742 [2024-07-16 01:13:27.511334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.742 [2024-07-16 01:13:27.511357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.742 [2024-07-16 01:13:27.511372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.742 [2024-07-16 01:13:27.511385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.742 [2024-07-16 01:13:27.511399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.742 [2024-07-16 01:13:27.511413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.742 [2024-07-16 01:13:27.511430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.743 [2024-07-16 01:13:27.511443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.743 [2024-07-16 01:13:27.511457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.743 [2024-07-16 01:13:27.511470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.743 [2024-07-16 01:13:27.511485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.743 [2024-07-16 01:13:27.511499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.743 [2024-07-16 01:13:27.511514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.743 [2024-07-16 01:13:27.511527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.743 [2024-07-16 01:13:27.511546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.743 [2024-07-16 01:13:27.511560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.743 [2024-07-16 01:13:27.511575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 
lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.743 [2024-07-16 01:13:27.511589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.743 [2024-07-16 01:13:27.511604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.743 [2024-07-16 01:13:27.511617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.743 [2024-07-16 01:13:27.511632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.743 [2024-07-16 01:13:27.511645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.743 [2024-07-16 01:13:27.511660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.743 [2024-07-16 01:13:27.511674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.743 [2024-07-16 01:13:27.511689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.743 [2024-07-16 01:13:27.511702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.743 [2024-07-16 01:13:27.511717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.743 [2024-07-16 01:13:27.511730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.743 [2024-07-16 01:13:27.511745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.743 [2024-07-16 01:13:27.511759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.743 [2024-07-16 01:13:27.511774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.743 [2024-07-16 01:13:27.511787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.743 [2024-07-16 01:13:27.511802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.743 [2024-07-16 01:13:27.511816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.743 [2024-07-16 01:13:27.511831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.743 [2024-07-16 01:13:27.511845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.743 [2024-07-16 01:13:27.511860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.743 [2024-07-16 01:13:27.511873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.743 [2024-07-16 01:13:27.511888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.743 [2024-07-16 01:13:27.511905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.743 [2024-07-16 01:13:27.511921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.743 [2024-07-16 01:13:27.511935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.743 [2024-07-16 01:13:27.511952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.743 [2024-07-16 01:13:27.511974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.743 [2024-07-16 01:13:27.511990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.743 [2024-07-16 01:13:27.512004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.743 [2024-07-16 01:13:27.512019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.743 [2024-07-16 01:13:27.512032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.743 [2024-07-16 01:13:27.512047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.743 [2024-07-16 01:13:27.512061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.743 [2024-07-16 01:13:27.512076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.743 [2024-07-16 01:13:27.512090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.743 [2024-07-16 01:13:27.512104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.743 [2024-07-16 01:13:27.512118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.743 [2024-07-16 01:13:27.512133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.743 [2024-07-16 01:13:27.512146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.743 [2024-07-16 01:13:27.512161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:11.743 [2024-07-16 01:13:27.512174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.743 [2024-07-16 01:13:27.512189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.743 [2024-07-16 01:13:27.512203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.743 [2024-07-16 01:13:27.512218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.743 [2024-07-16 01:13:27.512231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.743 [2024-07-16 01:13:27.512257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.743 [2024-07-16 01:13:27.512271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.743 [2024-07-16 01:13:27.512291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.743 [2024-07-16 01:13:27.512305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.743 [2024-07-16 01:13:27.512320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.743 [2024-07-16 01:13:27.512333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.743 [2024-07-16 01:13:27.512349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.743 [2024-07-16 01:13:27.512363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.743 [2024-07-16 01:13:27.512378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.743 [2024-07-16 01:13:27.512391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.743 [2024-07-16 01:13:27.512406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.743 [2024-07-16 01:13:27.512420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.743 [2024-07-16 01:13:27.512435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.743 [2024-07-16 01:13:27.512448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.743 [2024-07-16 01:13:27.512464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:11.743 [2024-07-16 01:13:27.512477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.743 [2024-07-16 01:13:27.512492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.743 [2024-07-16 01:13:27.512512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.743 [2024-07-16 01:13:27.512527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.743 [2024-07-16 01:13:27.512540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.743 [2024-07-16 01:13:27.512555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.743 [2024-07-16 01:13:27.512568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.743 [2024-07-16 01:13:27.512583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.743 [2024-07-16 01:13:27.512597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.743 [2024-07-16 01:13:27.512612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.743 [2024-07-16 01:13:27.512625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.743 [2024-07-16 01:13:27.512641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.743 [2024-07-16 01:13:27.512658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.743 [2024-07-16 01:13:27.512675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.743 [2024-07-16 01:13:27.512688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.744 [2024-07-16 01:13:27.512703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.744 [2024-07-16 01:13:27.512717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.744 [2024-07-16 01:13:27.512733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.744 [2024-07-16 01:13:27.512747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.744 [2024-07-16 01:13:27.512762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.744 [2024-07-16 
01:13:27.512775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.744 [2024-07-16 01:13:27.512791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.744 [2024-07-16 01:13:27.512805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.744 [2024-07-16 01:13:27.512820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.744 [2024-07-16 01:13:27.512833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.744 [2024-07-16 01:13:27.512847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.744 [2024-07-16 01:13:27.512860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.744 [2024-07-16 01:13:27.512875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.744 [2024-07-16 01:13:27.512889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.744 [2024-07-16 01:13:27.512914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.744 [2024-07-16 01:13:27.512928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.744 [2024-07-16 01:13:27.512943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.744 [2024-07-16 01:13:27.512962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.744 [2024-07-16 01:13:27.512978] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd4f6b0 is same with the state(5) to be set 00:20:11.744 [2024-07-16 01:13:27.514229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.744 [2024-07-16 01:13:27.514252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.744 [2024-07-16 01:13:27.514278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.744 [2024-07-16 01:13:27.514298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.744 [2024-07-16 01:13:27.514313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.744 [2024-07-16 01:13:27.514327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.744 [2024-07-16 01:13:27.514352] nvme_qpair.c: 
00:20:11.744-00:20:11.745 [2024-07-16 01:13:27.514229-516129] nvme_qpair.c: (64 READ/ABORTED - SQ DELETION (00/08) pairs for the next qpair: READ sqid:1 cid:0-63 nsid:1 lba:16384-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0)
00:20:11.745 [2024-07-16 01:13:27.516147] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd50b70 is same with the state(5) to be set
00:20:11.745-00:20:11.747 [2024-07-16 01:13:27.517383-519240] nvme_qpair.c: (64 READ/ABORTED - SQ DELETION (00/08) pairs for the next qpair: READ sqid:1 cid:0-63 nsid:1 lba:16384-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0)
00:20:11.747 [2024-07-16 01:13:27.519260] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd51f30 is same with the state(5) to be set
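Each dump is the same pattern per qpair: 64 queued READs covering lba 16384-24448 in 128-block strides (one command per 128-block stripe), all completed with the SQ-deletion abort as the TCP qpair is torn down. A completion callback that wants to survive such a teardown typically separates this status from real I/O errors; a minimal sketch, assuming the public SPDK NVMe API and a hypothetical io_ctx/read_done pair (not taken from this test):

/* Hypothetical sketch: an spdk_nvme_cmd_cb that treats SQ-deletion aborts
 * as a retryable teardown artifact rather than an I/O failure. */
#include <inttypes.h>
#include <stdio.h>
#include "spdk/nvme.h"

struct io_ctx {
	uint64_t lba;       /* starting LBA of this READ, e.g. 16384 + i * 128 */
	uint32_t lba_count; /* 128 blocks per command, as in the dumps above */
};

static void
read_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
	struct io_ctx *io = arg;

	if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
		/* The qpair is being deleted (as in the dumps above);
		 * requeue this READ for after reconnect instead of failing it. */
		printf("READ lba:%" PRIu64 " aborted by SQ deletion, will retry\n",
		       io->lba);
		return;
	}
	if (spdk_nvme_cpl_is_error(cpl)) {
		printf("READ lba:%" PRIu64 " failed\n", io->lba);
	}
}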
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.747 [2024-07-16 01:13:27.521892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.747 [2024-07-16 01:13:27.521905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.747 [2024-07-16 01:13:27.521921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.747 [2024-07-16 01:13:27.521934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.747 [2024-07-16 01:13:27.521965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.747 [2024-07-16 01:13:27.521982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.747 [2024-07-16 01:13:27.521997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.747 [2024-07-16 01:13:27.522011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.747 [2024-07-16 01:13:27.522026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.747 [2024-07-16 01:13:27.522039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.747 [2024-07-16 01:13:27.522054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.747 [2024-07-16 01:13:27.522073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.747 [2024-07-16 01:13:27.522088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.747 [2024-07-16 01:13:27.522101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.747 [2024-07-16 01:13:27.522117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.747 [2024-07-16 01:13:27.522131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.747 [2024-07-16 01:13:27.522146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.747 [2024-07-16 01:13:27.522159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.747 [2024-07-16 01:13:27.522174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.747 [2024-07-16 01:13:27.522187] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.747 [2024-07-16 01:13:27.522202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.747 [2024-07-16 01:13:27.522215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.747 [2024-07-16 01:13:27.522230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.747 [2024-07-16 01:13:27.522243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.747 [2024-07-16 01:13:27.522258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.747 [2024-07-16 01:13:27.522276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.747 [2024-07-16 01:13:27.522291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.747 [2024-07-16 01:13:27.522304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.747 [2024-07-16 01:13:27.522320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.747 [2024-07-16 01:13:27.522333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.747 [2024-07-16 01:13:27.522348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.747 [2024-07-16 01:13:27.522361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.747 [2024-07-16 01:13:27.522376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.747 [2024-07-16 01:13:27.522389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.747 [2024-07-16 01:13:27.522405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.747 [2024-07-16 01:13:27.522419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.747 [2024-07-16 01:13:27.522438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.747 [2024-07-16 01:13:27.522451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.747 [2024-07-16 01:13:27.522466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.747 [2024-07-16 01:13:27.522479] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:11.747 [2024-07-16 01:13:27.522495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:11.747 [2024-07-16 01:13:27.522508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION pairs elided for cid:26 through cid:62, lba stepping by 128 from 19712 to 24320; every completion reports the same (00/08) status ...]
00:20:11.748 [2024-07-16 01:13:27.523611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:11.748 [2024-07-16 01:13:27.523624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:11.748 [2024-07-16 01:13:27.523638] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd53470 is same with the state(5) to be set
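[Editor's note — the block above is one pattern repeated per outstanding command: each READ gets a completion whose status prints as an (SCT/SC) pair, and (00/08) is Status Code Type 0 (Generic Command Status), Status Code 0x08, which the NVMe base specification defines as Command Aborted due to SQ Deletion; the reads were cancelled because their submission queue was torn down under them during the controller reset. A minimal decoder for the pair (hypothetical helper, not part of SPDK or this test suite):]

    # decode_nvme_status SCT SC -- both in hex, as printed by spdk_nvme_print_completion
    decode_nvme_status() {
        local sct=$((16#$1)) sc=$((16#$2))
        local -a types=([0]="Generic Command Status" [1]="Command Specific Status"
                        [2]="Media and Data Integrity Errors" [3]="Path Related Status")
        if (( sct == 0 && sc == 0x08 )); then
            echo "${types[0]}: Command Aborted due to SQ Deletion"
        else
            printf '%s: status code 0x%02x\n' "${types[sct]:-Vendor Specific / Other}" "$sc"
        fi
    }
    decode_nvme_status 00 08   # -> Generic Command Status: Command Aborted due to SQ Deletion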
00:20:11.748 [2024-07-16 01:13:27.525244] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:20:11.748 [2024-07-16 01:13:27.525281] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:20:11.748 [2024-07-16 01:13:27.525299] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:20:11.748 [2024-07-16 01:13:27.525387 .. 525496] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. [message repeated 6 times]
00:20:11.748 [2024-07-16 01:13:27.525593] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:20:11.748 [2024-07-16 01:13:27.525617] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:20:11.749 [2024-07-16 01:13:27.525634] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:20:11.749 task offset: 24576 on job bdev=Nvme2n1 fails
00:20:11.749
00:20:11.749 Latency(us)
00:20:11.749 (every job: Core Mask 0x1, workload: verify, depth: 64, IO size: 65536; Verification LBA range: start 0x0 length 0x400; each job ended in about its listed runtime with error)
00:20:11.749 Device Information : runtime(s)     IOPS    MiB/s   Fail/s   TO/s      Average        min         max
00:20:11.749 Nvme1n1            :       0.90   217.33    13.58    70.97   0.00    219469.42   18544.26   240784.12
00:20:11.749 Nvme2n1            :       0.89   214.73    13.42    71.58   0.00    216470.00   18155.90   253211.69
00:20:11.749 Nvme3n1            :       0.90   214.45    13.40    71.48   0.00    212135.82    9563.40   257872.02
00:20:11.749 Nvme4n1            :       0.91   215.42    13.46    70.34   0.00    207906.89   17670.45   253211.69
00:20:11.749 Nvme5n1            :       0.91   140.19     8.76    70.10   0.00    276671.15   23592.96   259425.47
00:20:11.749 Nvme6n1            :       0.92   139.70     8.73    69.85   0.00    271738.25   38836.15   220589.32
00:20:11.749 Nvme7n1            :       0.92   139.21     8.70    69.60   0.00    266963.12   21068.61   237677.23
00:20:11.749 Nvme8n1            :       0.92   138.73     8.67    69.36   0.00    262107.59   19029.71   271853.04
00:20:11.749 Nvme9n1            :       0.93   138.26     8.64    69.13   0.00    257172.42   20777.34   254765.13
00:20:11.749 Nvme10n1           :       0.93   137.61     8.60    68.81   0.00    252730.79   22719.15   278066.82
00:20:11.749 ===================================================================================================================
00:20:11.749 Total              :             1695.63   105.98   701.22   0.00    240667.82    9563.40   278066.82
00:20:11.749 [2024-07-16 01:13:27.552250] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:20:11.749 [2024-07-16 01:13:27.552332] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:20:11.749 [2024-07-16 01:13:27.552649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:11.749 [2024-07-16 01:13:27.552686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b330 with addr=10.0.0.2, port=4420
00:20:11.749 [2024-07-16 01:13:27.552708] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b330 is same with the state(5) to be set
[... the same connect() errno = 111 / sock connection error / recv state sequence elided for tqpair=0xd97010 and tqpair=0x85c610, both addr=10.0.0.2, port=4420 ...]
00:20:11.749 [2024-07-16 01:13:27.554951] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:20:11.749 [2024-07-16 01:13:27.554986] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
[... connect() errno = 111 / sock connection error / recv state sequences elided for tqpair=0xe290f0, 0xe29390, 0xda2240 and 0xf25db0, all addr=10.0.0.2, port=4420 ...]
00:20:11.749 [2024-07-16 01:13:27.555624 .. 555664] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b330 / 0xd97010 / 0x85c610 (9): Bad file descriptor [three messages condensed]
00:20:11.749 [2024-07-16 01:13:27.555711 .. 555780] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. [message repeated 4 times]
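[Editor's note — a quick consistency check on the bdevperf table above: with 64 KiB I/Os (IO size: 65536), MiB/s should equal IOPS / 16, and it does for every row, e.g. 217.33 / 16 = 13.58 for Nvme1n1 and 1695.63 / 16 = 105.98 for the total. A spot check:]

    awk 'BEGIN {
        split("217.33 214.73 137.61", iops); split("13.58 13.42 8.60", mibs)
        for (i = 1; i <= 3; i++)   # MiB/s = IOPS * 65536 B / 2^20 = IOPS / 16
            printf "%7.2f IOPS -> %5.2f MiB/s (reported %5.2f)\n",
                   iops[i], iops[i] * 65536 / 1048576, mibs[i]
    }'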
00:20:11.749 [2024-07-16 01:13:27.555862] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:11.749 [2024-07-16 01:13:27.556022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:11.749 [2024-07-16 01:13:27.556050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd866c0 with addr=10.0.0.2, port=4420
00:20:11.749 [2024-07-16 01:13:27.556066] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd866c0 is same with the state(5) to be set
[... the same connect() errno = 111 / sock connection error / recv state sequence elided for tqpair=0xf264c0 ...]
00:20:11.749 [2024-07-16 01:13:27.556211 .. 556263] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe290f0 / 0xe29390 / 0xda2240 / 0xf25db0 (9): Bad file descriptor [four messages condensed]
00:20:11.749 [2024-07-16 01:13:27.556279] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:20:11.749 [2024-07-16 01:13:27.556297] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:20:11.749 [2024-07-16 01:13:27.556313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
[... the same Ctrlr is in error state / controller reinitialization failed / in failed state sequence elided for cnode5 and cnode6 ...]
00:20:11.749 [2024-07-16 01:13:27.556508 .. 556540] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. [message repeated 3 times]
00:20:11.749 [2024-07-16 01:13:27.556623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:11.749 [2024-07-16 01:13:27.556648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd5aab0 with addr=10.0.0.2, port=4420
00:20:11.749 [2024-07-16 01:13:27.556663] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5aab0 is same with the state(5) to be set
00:20:11.749 [2024-07-16 01:13:27.556680 .. 556698] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd866c0 / 0xf264c0 (9): Bad file descriptor [two messages condensed]
[... Ctrlr is in error state / controller reinitialization failed / in failed state sequences elided for cnode7, cnode8, cnode9 and cnode10, followed by "Resetting controller failed." repeated 4 times ...]
00:20:11.750 [2024-07-16 01:13:27.556978] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd5aab0 (9): Bad file descriptor
[... the same failure sequences elided for cnode3 and cnode2, "Resetting controller failed." repeated 2 times, then the sequence for cnode1 ...]
00:20:11.750 [2024-07-16 01:13:27.557188] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
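[Editor's note — the recurring connect() failed, errno = 111 messages above are expected for this test: the target application has already stopped (see the spdk_app_stop warning above), so every reconnect attempt to 10.0.0.2:4420 during the controller resets is refused. On Linux, errno 111 is ECONNREFUSED, easy to confirm:]

    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # ECONNREFUSED - Connection refused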
00:20:12.317 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:20:12.317 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:20:13.252 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 363 00:20:13.252 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (363) - No such process 00:20:13.252 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:20:13.252 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:20:13.252 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:13.252 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:13.252 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:13.252 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:13.252 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:13.252 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:20:13.252 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:13.252 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:20:13.252 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:13.252 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:13.252 rmmod nvme_tcp 00:20:13.252 rmmod nvme_fabrics 00:20:13.252 rmmod nvme_keyring 00:20:13.252 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:13.252 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:20:13.252 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:20:13.252 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:20:13.252 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:13.252 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:13.253 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:13.253 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:13.253 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:13.253 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:13.253 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:13.253 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:15.152 01:13:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:15.152 00:20:15.152 real 0m7.866s 00:20:15.152 user 0m19.654s 00:20:15.152 sys 0m1.436s 00:20:15.152 01:13:31 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:15.152 01:13:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:15.152 ************************************ 00:20:15.152 END TEST nvmf_shutdown_tc3 00:20:15.152 ************************************ 00:20:15.412 01:13:31 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:20:15.412 01:13:31 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:20:15.412 00:20:15.412 real 0m27.936s 00:20:15.412 user 1m18.184s 00:20:15.412 sys 0m6.521s 00:20:15.412 01:13:31 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:15.412 01:13:31 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:15.412 ************************************ 00:20:15.412 END TEST nvmf_shutdown 00:20:15.412 ************************************ 00:20:15.412 01:13:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:15.412 01:13:31 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:20:15.412 01:13:31 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:15.412 01:13:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:15.412 01:13:31 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:20:15.412 01:13:31 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:15.412 01:13:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:15.412 01:13:31 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:20:15.412 01:13:31 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:15.412 01:13:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:15.412 01:13:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:15.412 01:13:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:15.412 ************************************ 00:20:15.412 START TEST nvmf_multicontroller 00:20:15.412 ************************************ 00:20:15.412 01:13:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:15.412 * Looking for test storage... 
00:20:15.412 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:15.412 01:13:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:15.412 01:13:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:20:15.412 01:13:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:15.412 01:13:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:15.412 01:13:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:15.412 01:13:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:15.412 01:13:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:15.412 01:13:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:15.412 01:13:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:15.412 01:13:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:15.412 01:13:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:15.412 01:13:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:15.412 01:13:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:15.412 01:13:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:15.412 01:13:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:15.412 01:13:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:15.412 01:13:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:15.412 01:13:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:15.412 01:13:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:15.412 01:13:31 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:15.412 01:13:31 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:15.412 01:13:31 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:15.412 01:13:31 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.412 01:13:31 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.412 01:13:31 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.412 01:13:31 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:20:15.412 01:13:31 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.412 01:13:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:20:15.412 01:13:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:15.412 01:13:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:15.412 01:13:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:15.412 01:13:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:15.412 01:13:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:15.412 01:13:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:15.412 01:13:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:15.412 01:13:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:15.412 01:13:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:15.412 01:13:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:15.412 01:13:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:20:15.412 01:13:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:20:15.412 01:13:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:15.412 01:13:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:20:15.412 01:13:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:20:15.412 01:13:31 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:15.412 01:13:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:15.412 01:13:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:15.412 01:13:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:15.412 01:13:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:15.412 01:13:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:15.412 01:13:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:15.412 01:13:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:15.412 01:13:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:15.412 01:13:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:15.412 01:13:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:20:15.412 01:13:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:17.946 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:17.946 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:20:17.946 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:17.946 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:17.946 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:17.946 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:17.946 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:17.946 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:20:17.946 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:17.946 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:20:17.946 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:20:17.946 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:20:17.946 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:20:17.946 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:20:17.946 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:20:17.946 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:17.946 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:17.946 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:17.946 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:17.946 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:17.946 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:17.946 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:17.946 01:13:33 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:17.946 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:17.946 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:17.946 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:17.946 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:17.946 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:17.946 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:17.946 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:17.946 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:17.946 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:17.946 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:17.946 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:17.946 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:17.946 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:17.946 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:17.946 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:17.946 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:17.947 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:17.947 Found net devices under 0000:09:00.0: cvl_0_0 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:17.947 Found net devices under 0000:09:00.1: cvl_0_1 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:17.947 01:13:33 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:17.947 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:17.947 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:20:17.947 00:20:17.947 --- 10.0.0.2 ping statistics --- 00:20:17.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:17.947 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:17.947 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:17.947 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:20:17.947 00:20:17.947 --- 10.0.0.1 ping statistics --- 00:20:17.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:17.947 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=3096 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 3096 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 3096 ']' 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- 
# local max_retries=100 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:17.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:17.947 01:13:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:17.947 [2024-07-16 01:13:33.656255] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:20:17.947 [2024-07-16 01:13:33.656373] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:17.947 EAL: No free 2048 kB hugepages reported on node 1 00:20:17.947 [2024-07-16 01:13:33.721615] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:17.947 [2024-07-16 01:13:33.833162] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:17.947 [2024-07-16 01:13:33.833241] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:17.947 [2024-07-16 01:13:33.833255] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:17.947 [2024-07-16 01:13:33.833267] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:17.947 [2024-07-16 01:13:33.833277] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:17.947 [2024-07-16 01:13:33.833404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:17.947 [2024-07-16 01:13:33.833467] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:17.947 [2024-07-16 01:13:33.833471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:18.205 01:13:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:18.205 01:13:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:20:18.205 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:18.205 01:13:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:18.205 01:13:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:18.205 01:13:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:18.205 01:13:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:18.205 01:13:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.205 01:13:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:18.205 [2024-07-16 01:13:33.975058] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:18.205 01:13:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.205 01:13:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:18.205 01:13:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.205 01:13:33 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:18.205 Malloc0 00:20:18.205 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.205 01:13:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:18.205 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.205 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:18.205 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.205 01:13:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:18.205 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.205 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:18.205 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.205 01:13:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:18.205 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.205 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:18.205 [2024-07-16 01:13:34.032685] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:18.205 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.205 01:13:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:18.205 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.205 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:18.205 [2024-07-16 01:13:34.040552] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:18.205 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.205 01:13:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:18.205 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.205 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:18.205 Malloc1 00:20:18.205 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.205 01:13:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:20:18.205 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.205 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:18.205 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.205 01:13:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:20:18.205 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 
00:20:18.205 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:18.205 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.205 01:13:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:18.205 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.205 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:18.205 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.205 01:13:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:20:18.205 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.205 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:18.205 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.205 01:13:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3140 00:20:18.205 01:13:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:18.205 01:13:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3140 /var/tmp/bdevperf.sock 00:20:18.205 01:13:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:20:18.205 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 3140 ']' 00:20:18.205 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:18.205 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:18.206 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:18.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
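The bdevperf helper above is started with -z, which makes it wait for RPC configuration instead of running a workload immediately, and -r points it at a private socket (/var/tmp/bdevperf.sock) so its RPCs do not collide with the target's /var/tmp/spdk.sock. A hedged sketch of that launch-then-attach pattern, with the flags taken from the trace:

    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &
    # once the socket is listening, controllers are attached at runtime:
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
    # -i and -c pin the host-side address and service id, which keeps the
    # duplicate-attach checks that follow deterministic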
00:20:18.206 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:18.206 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:18.463 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:18.463 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:20:18.463 01:13:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:18.463 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.463 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:18.721 NVMe0n1 00:20:18.721 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.721 01:13:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:18.721 01:13:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:20:18.721 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.721 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:18.721 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.721 1 00:20:18.721 01:13:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:18.721 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:18.721 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:18.721 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:18.721 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:18.721 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:18.721 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:18.721 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:18.721 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.721 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:18.721 request: 00:20:18.721 { 00:20:18.721 "name": "NVMe0", 00:20:18.721 "trtype": "tcp", 00:20:18.721 "traddr": "10.0.0.2", 00:20:18.721 "adrfam": "ipv4", 00:20:18.721 "trsvcid": "4420", 00:20:18.721 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:18.721 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:20:18.721 "hostaddr": "10.0.0.2", 00:20:18.721 "hostsvcid": "60000", 00:20:18.721 "prchk_reftag": false, 
00:20:18.721 "prchk_guard": false, 00:20:18.721 "hdgst": false, 00:20:18.721 "ddgst": false, 00:20:18.721 "method": "bdev_nvme_attach_controller", 00:20:18.721 "req_id": 1 00:20:18.721 } 00:20:18.721 Got JSON-RPC error response 00:20:18.721 response: 00:20:18.721 { 00:20:18.721 "code": -114, 00:20:18.721 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:18.721 } 00:20:18.721 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:18.721 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:18.721 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:18.721 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:18.721 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:18.721 01:13:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:18.721 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:18.721 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:18.721 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:18.721 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:18.721 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:18.721 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:18.721 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:18.721 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.721 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:18.721 request: 00:20:18.721 { 00:20:18.721 "name": "NVMe0", 00:20:18.721 "trtype": "tcp", 00:20:18.721 "traddr": "10.0.0.2", 00:20:18.721 "adrfam": "ipv4", 00:20:18.721 "trsvcid": "4420", 00:20:18.721 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:18.721 "hostaddr": "10.0.0.2", 00:20:18.721 "hostsvcid": "60000", 00:20:18.721 "prchk_reftag": false, 00:20:18.721 "prchk_guard": false, 00:20:18.721 "hdgst": false, 00:20:18.721 "ddgst": false, 00:20:18.721 "method": "bdev_nvme_attach_controller", 00:20:18.721 "req_id": 1 00:20:18.721 } 00:20:18.721 Got JSON-RPC error response 00:20:18.721 response: 00:20:18.721 { 00:20:18.721 "code": -114, 00:20:18.721 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:18.721 } 00:20:18.721 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:18.721 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:18.721 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:18.721 01:13:34 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:18.721 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:18.721 01:13:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:18.721 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:18.721 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:18.721 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:18.721 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:18.721 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:18.721 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:18.721 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:18.721 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.721 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:18.721 request: 00:20:18.721 { 00:20:18.721 "name": "NVMe0", 00:20:18.721 "trtype": "tcp", 00:20:18.721 "traddr": "10.0.0.2", 00:20:18.721 "adrfam": "ipv4", 00:20:18.721 "trsvcid": "4420", 00:20:18.721 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:18.721 "hostaddr": "10.0.0.2", 00:20:18.721 "hostsvcid": "60000", 00:20:18.721 "prchk_reftag": false, 00:20:18.721 "prchk_guard": false, 00:20:18.721 "hdgst": false, 00:20:18.721 "ddgst": false, 00:20:18.721 "multipath": "disable", 00:20:18.721 "method": "bdev_nvme_attach_controller", 00:20:18.721 "req_id": 1 00:20:18.721 } 00:20:18.721 Got JSON-RPC error response 00:20:18.721 response: 00:20:18.721 { 00:20:18.721 "code": -114, 00:20:18.721 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:20:18.721 } 00:20:18.721 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:18.721 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:18.721 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:18.721 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:18.721 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:18.721 01:13:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:18.722 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:18.722 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:18.722 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:18.722 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:18.722 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:18.722 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:18.722 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:18.722 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.722 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:18.722 request: 00:20:18.722 { 00:20:18.722 "name": "NVMe0", 00:20:18.722 "trtype": "tcp", 00:20:18.722 "traddr": "10.0.0.2", 00:20:18.722 "adrfam": "ipv4", 00:20:18.722 "trsvcid": "4420", 00:20:18.722 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:18.722 "hostaddr": "10.0.0.2", 00:20:18.722 "hostsvcid": "60000", 00:20:18.722 "prchk_reftag": false, 00:20:18.722 "prchk_guard": false, 00:20:18.722 "hdgst": false, 00:20:18.722 "ddgst": false, 00:20:18.722 "multipath": "failover", 00:20:18.722 "method": "bdev_nvme_attach_controller", 00:20:18.722 "req_id": 1 00:20:18.722 } 00:20:18.722 Got JSON-RPC error response 00:20:18.722 response: 00:20:18.722 { 00:20:18.722 "code": -114, 00:20:18.722 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:18.722 } 00:20:18.722 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:18.722 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:18.722 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:18.722 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:18.722 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:18.722 01:13:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:18.722 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.722 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:18.979 00:20:18.979 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.979 01:13:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:18.979 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.979 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:18.979 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.979 01:13:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f 
ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:18.979 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.979 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:18.979 00:20:18.979 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.979 01:13:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:18.979 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.979 01:13:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:20:18.979 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:18.979 01:13:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.979 01:13:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:20:18.979 01:13:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:20.352 0 00:20:20.352 01:13:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:20:20.352 01:13:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.352 01:13:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:20.352 01:13:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.352 01:13:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 3140 00:20:20.352 01:13:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 3140 ']' 00:20:20.352 01:13:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 3140 00:20:20.352 01:13:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:20:20.352 01:13:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:20.352 01:13:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3140 00:20:20.352 01:13:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:20.352 01:13:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:20.352 01:13:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3140' 00:20:20.352 killing process with pid 3140 00:20:20.352 01:13:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 3140 00:20:20.352 01:13:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 3140 00:20:20.352 01:13:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:20.352 01:13:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.352 01:13:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:20.352 01:13:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.352 01:13:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 
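To summarize the negative tests the trace just walked through: every attempt to reuse the controller name NVMe0 with a different hostnqn, a different subsystem (cnode2), multipath disabled, or failover against an already-known path was rejected with JSON-RPC error -114, while re-attaching the same subsystem through the second listener (port 4421) succeeded and simply added a path. A condensed sketch of the two outcomes, reusing the bdevperf socket from above:

    # rejected with -114: same controller name, different subsystem
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000
    # accepted: same name, same subsystem, new listener -> second path for NVMe0
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1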
00:20:20.352 01:13:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.352 01:13:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:20.352 01:13:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.352 01:13:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:20:20.352 01:13:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:20.352 01:13:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:20:20.352 01:13:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:20:20.352 01:13:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:20:20.352 01:13:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:20:20.352 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:20.352 [2024-07-16 01:13:34.145697] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:20:20.352 [2024-07-16 01:13:34.145782] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3140 ] 00:20:20.352 EAL: No free 2048 kB hugepages reported on node 1 00:20:20.352 [2024-07-16 01:13:34.209390] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.353 [2024-07-16 01:13:34.319625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:20.353 [2024-07-16 01:13:34.852012] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name c236fe4f-4367-4224-8f2c-bbf994ac8323 already exists 00:20:20.353 [2024-07-16 01:13:34.852055] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:c236fe4f-4367-4224-8f2c-bbf994ac8323 alias for bdev NVMe1n1 00:20:20.353 [2024-07-16 01:13:34.852071] bdev_nvme.c:4322:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:20:20.353 Running I/O for 1 seconds... 
00:20:20.353 
00:20:20.353 Latency(us)
00:20:20.353 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:20.353 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:20:20.353 NVMe0n1 : 1.00 18658.28 72.88 0.00 0.00 6848.79 4150.61 12330.48
00:20:20.353 ===================================================================================================================
00:20:20.353 Total : 18658.28 72.88 0.00 0.00 6848.79 4150.61 12330.48
00:20:20.353 Received shutdown signal, test time was about 1.000000 seconds
00:20:20.353 
00:20:20.353 Latency(us)
00:20:20.353 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:20.353 ===================================================================================================================
00:20:20.353 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:20.353 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:20:20.353 01:13:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:20:20.353 01:13:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file
00:20:20.353 01:13:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini
00:20:20.353 01:13:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup
00:20:20.353 01:13:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync
00:20:20.353 01:13:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:20:20.353 01:13:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e
00:20:20.353 01:13:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20}
00:20:20.353 01:13:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:20:20.353 rmmod nvme_tcp
00:20:20.353 rmmod nvme_fabrics
00:20:20.353 rmmod nvme_keyring
00:20:20.610 01:13:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:20:20.610 01:13:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e
00:20:20.610 01:13:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0
00:20:20.610 01:13:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 3096 ']'
00:20:20.610 01:13:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 3096
00:20:20.610 01:13:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 3096 ']'
00:20:20.610 01:13:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 3096
00:20:20.610 01:13:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname
00:20:20.610 01:13:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:20:20.610 01:13:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3096
00:20:20.610 01:13:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:20:20.610 01:13:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:20:20.610 01:13:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3096'
00:20:20.610 killing process with pid 3096
00:20:20.610 01:13:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 3096
00:20:20.610 01:13:36 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@972 -- # wait 3096 00:20:20.868 01:13:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:20.868 01:13:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:20.868 01:13:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:20.868 01:13:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:20.868 01:13:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:20.868 01:13:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:20.868 01:13:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:20.868 01:13:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:22.770 01:13:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:22.770 00:20:22.770 real 0m7.498s 00:20:22.770 user 0m11.494s 00:20:22.770 sys 0m2.344s 00:20:22.770 01:13:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:22.770 01:13:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:22.770 ************************************ 00:20:22.770 END TEST nvmf_multicontroller 00:20:22.770 ************************************ 00:20:23.028 01:13:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:23.028 01:13:38 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:23.028 01:13:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:23.028 01:13:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:23.028 01:13:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:23.028 ************************************ 00:20:23.028 START TEST nvmf_aer 00:20:23.028 ************************************ 00:20:23.028 01:13:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:23.028 * Looking for test storage... 
00:20:23.028 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:23.028 01:13:38 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:23.028 01:13:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:20:23.028 01:13:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:23.028 01:13:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:23.028 01:13:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:23.028 01:13:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:23.028 01:13:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:23.028 01:13:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:23.028 01:13:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:23.028 01:13:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:23.028 01:13:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:23.028 01:13:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:23.028 01:13:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:23.028 01:13:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:23.028 01:13:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:23.028 01:13:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:23.028 01:13:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:23.028 01:13:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:23.028 01:13:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:23.028 01:13:38 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:23.028 01:13:38 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:23.028 01:13:38 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:23.028 01:13:38 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.028 01:13:38 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.028 01:13:38 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.028 01:13:38 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:20:23.028 01:13:38 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.028 01:13:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:20:23.028 01:13:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:23.028 01:13:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:23.028 01:13:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:23.028 01:13:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:23.028 01:13:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:23.028 01:13:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:23.028 01:13:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:23.028 01:13:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:23.028 01:13:38 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:20:23.028 01:13:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:23.028 01:13:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:23.028 01:13:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:23.028 01:13:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:23.028 01:13:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:23.028 01:13:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:23.028 01:13:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:23.028 01:13:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:23.028 01:13:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:23.028 01:13:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:23.028 01:13:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:20:23.028 01:13:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:25.558 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:25.558 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:20:25.558 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:25.558 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:20:25.558 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:25.558 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:25.558 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:25.558 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:20:25.558 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:25.558 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:20:25.558 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:20:25.558 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:20:25.558 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:20:25.558 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:20:25.558 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:20:25.558 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:25.558 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:25.558 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:25.558 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:25.558 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:25.558 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:25.558 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:25.558 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:25.558 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:25.558 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:25.558 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:25.559 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:25.559 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:25.559 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:25.559 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:25.559 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:25.559 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:25.559 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:25.559 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:25.559 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:25.559 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:25.559 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:25.559 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:25.559 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:25.559 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:25.559 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:25.559 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 
0x159b)' 00:20:25.559 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:25.559 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:25.559 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:25.559 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:25.559 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:25.559 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:25.559 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:25.559 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:25.559 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:25.559 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:25.559 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:25.559 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:25.559 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:25.559 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:25.559 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:25.559 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:25.559 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:25.559 Found net devices under 0000:09:00.0: cvl_0_0 00:20:25.559 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:25.559 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:25.559 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:25.559 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:25.559 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:25.559 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:25.559 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:25.559 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:25.559 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:25.559 Found net devices under 0000:09:00.1: cvl_0_1 00:20:25.559 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:25.559 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:25.559 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:20:25.559 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:25.559 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:25.559 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:25.559 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:25.559 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:25.559 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:25.559 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:25.559 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:25.559 
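The scan above maps each E810 PCI function to its kernel net device by listing the sysfs net/ directory under the device, which is how cvl_0_0 and cvl_0_1 are discovered. A minimal sketch of that lookup, with the bus addresses taken from the trace:

    for pci in 0000:09:00.0 0000:09:00.1; do
        echo "$pci -> $(ls /sys/bus/pci/devices/$pci/net/)"
    done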
01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:25.559 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:25.559 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:25.559 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:25.559 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:25.559 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:25.559 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:25.559 01:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:25.559 01:13:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:25.559 01:13:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:25.559 01:13:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:25.559 01:13:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:25.559 01:13:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:25.559 01:13:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:25.559 01:13:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:25.559 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:25.559 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:20:25.559 00:20:25.559 --- 10.0.0.2 ping statistics --- 00:20:25.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:25.559 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:20:25.559 01:13:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:25.559 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:25.559 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:20:25.559 00:20:25.559 --- 10.0.0.1 ping statistics --- 00:20:25.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:25.559 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:20:25.559 01:13:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:25.559 01:13:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:20:25.559 01:13:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:25.559 01:13:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:25.559 01:13:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:25.559 01:13:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:25.559 01:13:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:25.559 01:13:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:25.559 01:13:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:25.559 01:13:41 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:20:25.559 01:13:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:25.559 01:13:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:25.559 01:13:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:25.559 01:13:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=5372 00:20:25.559 01:13:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:25.559 01:13:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 5372 00:20:25.559 01:13:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 5372 ']' 00:20:25.559 01:13:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:25.559 01:13:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:25.559 01:13:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:25.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:25.559 01:13:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:25.559 01:13:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:25.559 [2024-07-16 01:13:41.184761] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:20:25.559 [2024-07-16 01:13:41.184848] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:25.559 EAL: No free 2048 kB hugepages reported on node 1 00:20:25.559 [2024-07-16 01:13:41.250497] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:25.559 [2024-07-16 01:13:41.359427] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:25.559 [2024-07-16 01:13:41.359476] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:25.559 [2024-07-16 01:13:41.359504] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:25.559 [2024-07-16 01:13:41.359515] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:25.559 [2024-07-16 01:13:41.359524] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:25.559 [2024-07-16 01:13:41.359602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:25.559 [2024-07-16 01:13:41.359667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:25.559 [2024-07-16 01:13:41.359733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:25.559 [2024-07-16 01:13:41.359736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:25.559 01:13:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:25.559 01:13:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:20:25.559 01:13:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:25.559 01:13:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:25.559 01:13:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:25.559 01:13:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:25.559 01:13:41 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:25.559 01:13:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.559 01:13:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:25.559 [2024-07-16 01:13:41.523829] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:25.559 01:13:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.559 01:13:41 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:20:25.559 01:13:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.559 01:13:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:25.817 Malloc0 00:20:25.817 01:13:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.817 01:13:41 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:20:25.817 01:13:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.817 01:13:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:25.817 01:13:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.817 01:13:41 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:25.817 01:13:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.817 01:13:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:25.817 01:13:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.817 01:13:41 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:25.818 01:13:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.818 01:13:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:25.818 [2024-07-16 01:13:41.577630] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:20:25.818 01:13:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.818 01:13:41 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:20:25.818 01:13:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.818 01:13:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:25.818 [ 00:20:25.818 { 00:20:25.818 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:25.818 "subtype": "Discovery", 00:20:25.818 "listen_addresses": [], 00:20:25.818 "allow_any_host": true, 00:20:25.818 "hosts": [] 00:20:25.818 }, 00:20:25.818 { 00:20:25.818 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:25.818 "subtype": "NVMe", 00:20:25.818 "listen_addresses": [ 00:20:25.818 { 00:20:25.818 "trtype": "TCP", 00:20:25.818 "adrfam": "IPv4", 00:20:25.818 "traddr": "10.0.0.2", 00:20:25.818 "trsvcid": "4420" 00:20:25.818 } 00:20:25.818 ], 00:20:25.818 "allow_any_host": true, 00:20:25.818 "hosts": [], 00:20:25.818 "serial_number": "SPDK00000000000001", 00:20:25.818 "model_number": "SPDK bdev Controller", 00:20:25.818 "max_namespaces": 2, 00:20:25.818 "min_cntlid": 1, 00:20:25.818 "max_cntlid": 65519, 00:20:25.818 "namespaces": [ 00:20:25.818 { 00:20:25.818 "nsid": 1, 00:20:25.818 "bdev_name": "Malloc0", 00:20:25.818 "name": "Malloc0", 00:20:25.818 "nguid": "9E66B37CDA2546A08B9C84769D469928", 00:20:25.818 "uuid": "9e66b37c-da25-46a0-8b9c-84769d469928" 00:20:25.818 } 00:20:25.818 ] 00:20:25.818 } 00:20:25.818 ] 00:20:25.818 01:13:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.818 01:13:41 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:25.818 01:13:41 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:20:25.818 01:13:41 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=5502 00:20:25.818 01:13:41 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:20:25.818 01:13:41 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:20:25.818 01:13:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:20:25.818 01:13:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:25.818 01:13:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:20:25.818 01:13:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:20:25.818 01:13:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:20:25.818 EAL: No free 2048 kB hugepages reported on node 1 00:20:25.818 01:13:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:25.818 01:13:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:20:25.818 01:13:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:20:25.818 01:13:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:20:25.818 01:13:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:25.818 01:13:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:20:25.818 01:13:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:20:25.818 01:13:41 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:20:25.818 01:13:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.818 01:13:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:26.076 Malloc1 00:20:26.076 01:13:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.076 01:13:41 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:20:26.076 01:13:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.076 01:13:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:26.076 01:13:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.076 01:13:41 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:20:26.076 01:13:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.076 01:13:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:26.076 [ 00:20:26.076 { 00:20:26.076 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:26.076 "subtype": "Discovery", 00:20:26.076 "listen_addresses": [], 00:20:26.076 "allow_any_host": true, 00:20:26.076 "hosts": [] 00:20:26.076 }, 00:20:26.076 { 00:20:26.076 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:26.076 "subtype": "NVMe", 00:20:26.076 "listen_addresses": [ 00:20:26.076 { 00:20:26.076 "trtype": "TCP", 00:20:26.076 "adrfam": "IPv4", 00:20:26.076 "traddr": "10.0.0.2", 00:20:26.076 "trsvcid": "4420" 00:20:26.076 } 00:20:26.076 ], 00:20:26.076 "allow_any_host": true, 00:20:26.076 "hosts": [], 00:20:26.076 "serial_number": "SPDK00000000000001", 00:20:26.076 "model_number": "SPDK bdev Controller", 00:20:26.077 "max_namespaces": 2, 00:20:26.077 "min_cntlid": 1, 00:20:26.077 "max_cntlid": 65519, 00:20:26.077 "namespaces": [ 00:20:26.077 { 00:20:26.077 "nsid": 1, 00:20:26.077 "bdev_name": "Malloc0", 00:20:26.077 "name": "Malloc0", 00:20:26.077 "nguid": "9E66B37CDA2546A08B9C84769D469928", 00:20:26.077 "uuid": "9e66b37c-da25-46a0-8b9c-84769d469928" 00:20:26.077 }, 00:20:26.077 { 00:20:26.077 "nsid": 2, 00:20:26.077 "bdev_name": "Malloc1", 00:20:26.077 "name": "Malloc1", 00:20:26.077 "nguid": "5A862CBC0E99421590B353F14020A834", 00:20:26.077 "uuid": "5a862cbc-0e99-4215-90b3-53f14020a834" 00:20:26.077 } 00:20:26.077 ] 00:20:26.077 } 00:20:26.077 ] 00:20:26.077 01:13:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.077 01:13:41 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 5502 00:20:26.077 Asynchronous Event Request test 00:20:26.077 Attaching to 10.0.0.2 00:20:26.077 Attached to 10.0.0.2 00:20:26.077 Registering asynchronous event callbacks... 00:20:26.077 Starting namespace attribute notice tests for all controllers... 00:20:26.077 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:26.077 aer_cb - Changed Namespace 00:20:26.077 Cleaning up... 
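The AER exchange above has three steps: the aer helper connects to cnode1 and arms asynchronous-event callbacks, touching /tmp/aer_touch_file once ready; the script then hot-adds Malloc1 as namespace 2; the target raises a Namespace Attribute Changed event, which the helper reads back through log page 4 (the Changed Namespace List) and reports as 'aer_cb - Changed Namespace'. A condensed sketch of the sequence, with paths as in the trace:

    test/nvme/aer/aer -n 2 -t /tmp/aer_touch_file \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' &
    until [ -e /tmp/aer_touch_file ]; do sleep 0.1; done   # wait until AER is armed
    rpc.py bdev_malloc_create 64 4096 --name Malloc1
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2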
00:20:26.077 01:13:41 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:26.077 01:13:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.077 01:13:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:26.077 01:13:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.077 01:13:41 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:20:26.077 01:13:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.077 01:13:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:26.077 01:13:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.077 01:13:41 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:26.077 01:13:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.077 01:13:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:26.077 01:13:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.077 01:13:41 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:20:26.077 01:13:41 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:20:26.077 01:13:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:26.077 01:13:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:20:26.077 01:13:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:26.077 01:13:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:20:26.077 01:13:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:26.077 01:13:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:26.077 rmmod nvme_tcp 00:20:26.077 rmmod nvme_fabrics 00:20:26.077 rmmod nvme_keyring 00:20:26.077 01:13:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:26.077 01:13:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:20:26.077 01:13:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:20:26.077 01:13:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 5372 ']' 00:20:26.077 01:13:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 5372 00:20:26.077 01:13:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 5372 ']' 00:20:26.077 01:13:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 5372 00:20:26.077 01:13:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:20:26.077 01:13:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:26.077 01:13:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 5372 00:20:26.077 01:13:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:26.077 01:13:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:26.077 01:13:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 5372' 00:20:26.077 killing process with pid 5372 00:20:26.077 01:13:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 5372 00:20:26.077 01:13:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 5372 00:20:26.335 01:13:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:26.335 01:13:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:26.335 01:13:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 
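The teardown above is the standard exit path for these host tests: both malloc bdevs and the cnode1 subsystem are removed over RPC, then nvmftestfini unloads the initiator-side kernel modules (the modprobe -v -r nvme-tcp retry loop, with rmmod output for nvme_tcp, nvme_fabrics and nvme_keyring) and kills the nvmf_tgt reactor, pid 5372 in this run; the nvmf_tcp_fini steps continue below. A rough hand-runnable equivalent, noting that the pid is specific to this run:

  scripts/rpc.py bdev_malloc_delete Malloc0
  scripts/rpc.py bdev_malloc_delete Malloc1
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  sudo modprobe -v -r nvme-tcp nvme-fabrics   # host-side transport modules
  kill 5372                                   # the nvmf_tgt process (reactor_0)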
00:20:26.335 01:13:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:26.335 01:13:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:26.335 01:13:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:26.336 01:13:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:26.336 01:13:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:28.873 01:13:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:28.873 00:20:28.873 real 0m5.536s 00:20:28.873 user 0m4.305s 00:20:28.873 sys 0m2.039s 00:20:28.873 01:13:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:28.873 01:13:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:28.873 ************************************ 00:20:28.873 END TEST nvmf_aer 00:20:28.873 ************************************ 00:20:28.873 01:13:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:28.873 01:13:44 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:28.873 01:13:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:28.873 01:13:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:28.873 01:13:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:28.873 ************************************ 00:20:28.873 START TEST nvmf_async_init 00:20:28.873 ************************************ 00:20:28.873 01:13:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:28.873 * Looking for test storage... 
00:20:28.873 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:28.873 01:13:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:28.873 01:13:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:20:28.873 01:13:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:28.873 01:13:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:28.873 01:13:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:28.873 01:13:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:28.873 01:13:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:28.873 01:13:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:28.873 01:13:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:28.873 01:13:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:28.873 01:13:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:28.873 01:13:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:28.873 01:13:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:28.873 01:13:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:28.873 01:13:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:28.873 01:13:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:28.873 01:13:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:28.873 01:13:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:28.873 01:13:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:28.873 01:13:44 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:28.873 01:13:44 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:28.873 01:13:44 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:28.873 01:13:44 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.873 01:13:44 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.873 01:13:44 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.873 01:13:44 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:20:28.873 01:13:44 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.873 01:13:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:20:28.873 01:13:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:28.873 01:13:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:28.873 01:13:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:28.873 01:13:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:28.873 01:13:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:28.873 01:13:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:28.874 01:13:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:28.874 01:13:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:28.874 01:13:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:20:28.874 01:13:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:20:28.874 01:13:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:20:28.874 01:13:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:20:28.874 01:13:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:20:28.874 01:13:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:20:28.874 01:13:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=420f9f1f300c46579a1c6c5bc1ec163e 00:20:28.874 01:13:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:20:28.874 01:13:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:28.874 01:13:44 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:28.874 01:13:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:28.874 01:13:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:28.874 01:13:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:28.874 01:13:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:28.874 01:13:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:28.874 01:13:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:28.874 01:13:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:28.874 01:13:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:28.874 01:13:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:20:28.874 01:13:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:30.774 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:30.774 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:20:30.774 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:30.774 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:30.774 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:30.774 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:30.775 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:30.775 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:30.775 Found net devices under 0000:09:00.0: cvl_0_0 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:30.775 Found net devices under 0000:09:00.1: cvl_0_1 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:30.775 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:30.775 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:20:30.775 00:20:30.775 --- 10.0.0.2 ping statistics --- 00:20:30.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:30.775 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:30.775 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:30.775 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:20:30.775 00:20:30.775 --- 10.0.0.1 ping statistics --- 00:20:30.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:30.775 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=7442 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 7442 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 7442 ']' 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:30.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:30.775 01:13:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:30.775 [2024-07-16 01:13:46.729102] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 
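Before nvmf_async_init can run, nvmftestinit builds the same two-port TCP rig as the other phy tests: the pair of Intel E810 ports (0x8086:0x159b, ice driver) is resolved to cvl_0_0 and cvl_0_1 through /sys/bus/pci/devices/$pci/net, cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), port 4420 is opened in iptables, and a ping in each direction proves the link. A condensed sketch of that wiring (the interface names are this host's ice ports and will differ elsewhere):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator port
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                     # initiator -> target

The nvmf_tgt startup banner above continues below with the full DPDK EAL parameter list, confirming the target came up inside that namespace.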
00:20:30.775 [2024-07-16 01:13:46.729173] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:30.775 EAL: No free 2048 kB hugepages reported on node 1 00:20:31.035 [2024-07-16 01:13:46.791222] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:31.035 [2024-07-16 01:13:46.899710] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:31.035 [2024-07-16 01:13:46.899797] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:31.035 [2024-07-16 01:13:46.899810] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:31.035 [2024-07-16 01:13:46.899836] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:31.035 [2024-07-16 01:13:46.899853] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:31.035 [2024-07-16 01:13:46.899891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:31.035 01:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:31.035 01:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:20:31.035 01:13:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:31.035 01:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:31.035 01:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:31.340 01:13:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:31.340 01:13:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:20:31.340 01:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.340 01:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:31.340 [2024-07-16 01:13:47.039337] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:31.340 01:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.340 01:13:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:20:31.340 01:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.340 01:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:31.340 null0 00:20:31.340 01:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.340 01:13:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:20:31.340 01:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.340 01:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:31.340 01:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.340 01:13:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:20:31.340 01:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.340 01:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:31.340 01:13:47 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.340 01:13:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 420f9f1f300c46579a1c6c5bc1ec163e 00:20:31.340 01:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.340 01:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:31.340 01:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.340 01:13:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:31.340 01:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.340 01:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:31.340 [2024-07-16 01:13:47.079563] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:31.340 01:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.340 01:13:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:20:31.340 01:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.340 01:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:31.597 nvme0n1 00:20:31.597 01:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.597 01:13:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:31.597 01:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.597 01:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:31.597 [ 00:20:31.597 { 00:20:31.597 "name": "nvme0n1", 00:20:31.597 "aliases": [ 00:20:31.597 "420f9f1f-300c-4657-9a1c-6c5bc1ec163e" 00:20:31.597 ], 00:20:31.597 "product_name": "NVMe disk", 00:20:31.597 "block_size": 512, 00:20:31.597 "num_blocks": 2097152, 00:20:31.597 "uuid": "420f9f1f-300c-4657-9a1c-6c5bc1ec163e", 00:20:31.597 "assigned_rate_limits": { 00:20:31.597 "rw_ios_per_sec": 0, 00:20:31.597 "rw_mbytes_per_sec": 0, 00:20:31.597 "r_mbytes_per_sec": 0, 00:20:31.597 "w_mbytes_per_sec": 0 00:20:31.597 }, 00:20:31.597 "claimed": false, 00:20:31.597 "zoned": false, 00:20:31.597 "supported_io_types": { 00:20:31.597 "read": true, 00:20:31.597 "write": true, 00:20:31.597 "unmap": false, 00:20:31.597 "flush": true, 00:20:31.597 "reset": true, 00:20:31.597 "nvme_admin": true, 00:20:31.597 "nvme_io": true, 00:20:31.597 "nvme_io_md": false, 00:20:31.597 "write_zeroes": true, 00:20:31.597 "zcopy": false, 00:20:31.597 "get_zone_info": false, 00:20:31.597 "zone_management": false, 00:20:31.597 "zone_append": false, 00:20:31.597 "compare": true, 00:20:31.597 "compare_and_write": true, 00:20:31.597 "abort": true, 00:20:31.597 "seek_hole": false, 00:20:31.597 "seek_data": false, 00:20:31.597 "copy": true, 00:20:31.597 "nvme_iov_md": false 00:20:31.597 }, 00:20:31.597 "memory_domains": [ 00:20:31.597 { 00:20:31.597 "dma_device_id": "system", 00:20:31.597 "dma_device_type": 1 00:20:31.597 } 00:20:31.597 ], 00:20:31.597 "driver_specific": { 00:20:31.597 "nvme": [ 00:20:31.597 { 00:20:31.597 "trid": { 00:20:31.597 "trtype": "TCP", 00:20:31.597 "adrfam": "IPv4", 00:20:31.597 "traddr": "10.0.0.2", 
00:20:31.597 "trsvcid": "4420", 00:20:31.597 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:31.597 }, 00:20:31.597 "ctrlr_data": { 00:20:31.597 "cntlid": 1, 00:20:31.597 "vendor_id": "0x8086", 00:20:31.597 "model_number": "SPDK bdev Controller", 00:20:31.597 "serial_number": "00000000000000000000", 00:20:31.597 "firmware_revision": "24.09", 00:20:31.597 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:31.597 "oacs": { 00:20:31.597 "security": 0, 00:20:31.597 "format": 0, 00:20:31.597 "firmware": 0, 00:20:31.597 "ns_manage": 0 00:20:31.597 }, 00:20:31.597 "multi_ctrlr": true, 00:20:31.597 "ana_reporting": false 00:20:31.597 }, 00:20:31.597 "vs": { 00:20:31.597 "nvme_version": "1.3" 00:20:31.597 }, 00:20:31.597 "ns_data": { 00:20:31.597 "id": 1, 00:20:31.598 "can_share": true 00:20:31.598 } 00:20:31.598 } 00:20:31.598 ], 00:20:31.598 "mp_policy": "active_passive" 00:20:31.598 } 00:20:31.598 } 00:20:31.598 ] 00:20:31.598 01:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.598 01:13:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:20:31.598 01:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.598 01:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:31.598 [2024-07-16 01:13:47.332633] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:31.598 [2024-07-16 01:13:47.332720] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x164e2b0 (9): Bad file descriptor 00:20:31.598 [2024-07-16 01:13:47.465088] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:20:31.598 01:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.598 01:13:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:31.598 01:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.598 01:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:31.598 [ 00:20:31.598 { 00:20:31.598 "name": "nvme0n1", 00:20:31.598 "aliases": [ 00:20:31.598 "420f9f1f-300c-4657-9a1c-6c5bc1ec163e" 00:20:31.598 ], 00:20:31.598 "product_name": "NVMe disk", 00:20:31.598 "block_size": 512, 00:20:31.598 "num_blocks": 2097152, 00:20:31.598 "uuid": "420f9f1f-300c-4657-9a1c-6c5bc1ec163e", 00:20:31.598 "assigned_rate_limits": { 00:20:31.598 "rw_ios_per_sec": 0, 00:20:31.598 "rw_mbytes_per_sec": 0, 00:20:31.598 "r_mbytes_per_sec": 0, 00:20:31.598 "w_mbytes_per_sec": 0 00:20:31.598 }, 00:20:31.598 "claimed": false, 00:20:31.598 "zoned": false, 00:20:31.598 "supported_io_types": { 00:20:31.598 "read": true, 00:20:31.598 "write": true, 00:20:31.598 "unmap": false, 00:20:31.598 "flush": true, 00:20:31.598 "reset": true, 00:20:31.598 "nvme_admin": true, 00:20:31.598 "nvme_io": true, 00:20:31.598 "nvme_io_md": false, 00:20:31.598 "write_zeroes": true, 00:20:31.598 "zcopy": false, 00:20:31.598 "get_zone_info": false, 00:20:31.598 "zone_management": false, 00:20:31.598 "zone_append": false, 00:20:31.598 "compare": true, 00:20:31.598 "compare_and_write": true, 00:20:31.598 "abort": true, 00:20:31.598 "seek_hole": false, 00:20:31.598 "seek_data": false, 00:20:31.598 "copy": true, 00:20:31.598 "nvme_iov_md": false 00:20:31.598 }, 00:20:31.598 "memory_domains": [ 00:20:31.598 { 00:20:31.598 "dma_device_id": "system", 00:20:31.598 "dma_device_type": 
1 00:20:31.598 } 00:20:31.598 ], 00:20:31.598 "driver_specific": { 00:20:31.598 "nvme": [ 00:20:31.598 { 00:20:31.598 "trid": { 00:20:31.598 "trtype": "TCP", 00:20:31.598 "adrfam": "IPv4", 00:20:31.598 "traddr": "10.0.0.2", 00:20:31.598 "trsvcid": "4420", 00:20:31.598 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:31.598 }, 00:20:31.598 "ctrlr_data": { 00:20:31.598 "cntlid": 2, 00:20:31.598 "vendor_id": "0x8086", 00:20:31.598 "model_number": "SPDK bdev Controller", 00:20:31.598 "serial_number": "00000000000000000000", 00:20:31.598 "firmware_revision": "24.09", 00:20:31.598 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:31.598 "oacs": { 00:20:31.598 "security": 0, 00:20:31.598 "format": 0, 00:20:31.598 "firmware": 0, 00:20:31.598 "ns_manage": 0 00:20:31.598 }, 00:20:31.598 "multi_ctrlr": true, 00:20:31.598 "ana_reporting": false 00:20:31.598 }, 00:20:31.598 "vs": { 00:20:31.598 "nvme_version": "1.3" 00:20:31.598 }, 00:20:31.598 "ns_data": { 00:20:31.598 "id": 1, 00:20:31.598 "can_share": true 00:20:31.598 } 00:20:31.598 } 00:20:31.598 ], 00:20:31.598 "mp_policy": "active_passive" 00:20:31.598 } 00:20:31.598 } 00:20:31.598 ] 00:20:31.598 01:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.598 01:13:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:31.598 01:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.598 01:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:31.598 01:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.598 01:13:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:20:31.598 01:13:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.a0PyaXZYqX 00:20:31.598 01:13:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:31.598 01:13:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.a0PyaXZYqX 00:20:31.598 01:13:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:20:31.598 01:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.598 01:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:31.598 01:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.598 01:13:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:20:31.598 01:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.598 01:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:31.598 [2024-07-16 01:13:47.517281] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:31.598 [2024-07-16 01:13:47.517407] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:31.598 01:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.598 01:13:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.a0PyaXZYqX 00:20:31.598 01:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 
00:20:31.598 01:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:31.598 [2024-07-16 01:13:47.525317] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:31.598 01:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.598 01:13:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.a0PyaXZYqX 00:20:31.598 01:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.598 01:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:31.598 [2024-07-16 01:13:47.533337] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:31.598 [2024-07-16 01:13:47.533400] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:31.856 nvme0n1 00:20:31.856 01:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.856 01:13:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:31.856 01:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.856 01:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:31.856 [ 00:20:31.856 { 00:20:31.856 "name": "nvme0n1", 00:20:31.856 "aliases": [ 00:20:31.856 "420f9f1f-300c-4657-9a1c-6c5bc1ec163e" 00:20:31.856 ], 00:20:31.856 "product_name": "NVMe disk", 00:20:31.856 "block_size": 512, 00:20:31.856 "num_blocks": 2097152, 00:20:31.856 "uuid": "420f9f1f-300c-4657-9a1c-6c5bc1ec163e", 00:20:31.856 "assigned_rate_limits": { 00:20:31.856 "rw_ios_per_sec": 0, 00:20:31.856 "rw_mbytes_per_sec": 0, 00:20:31.856 "r_mbytes_per_sec": 0, 00:20:31.856 "w_mbytes_per_sec": 0 00:20:31.856 }, 00:20:31.856 "claimed": false, 00:20:31.856 "zoned": false, 00:20:31.856 "supported_io_types": { 00:20:31.856 "read": true, 00:20:31.856 "write": true, 00:20:31.856 "unmap": false, 00:20:31.856 "flush": true, 00:20:31.856 "reset": true, 00:20:31.856 "nvme_admin": true, 00:20:31.856 "nvme_io": true, 00:20:31.856 "nvme_io_md": false, 00:20:31.856 "write_zeroes": true, 00:20:31.856 "zcopy": false, 00:20:31.856 "get_zone_info": false, 00:20:31.856 "zone_management": false, 00:20:31.857 "zone_append": false, 00:20:31.857 "compare": true, 00:20:31.857 "compare_and_write": true, 00:20:31.857 "abort": true, 00:20:31.857 "seek_hole": false, 00:20:31.857 "seek_data": false, 00:20:31.857 "copy": true, 00:20:31.857 "nvme_iov_md": false 00:20:31.857 }, 00:20:31.857 "memory_domains": [ 00:20:31.857 { 00:20:31.857 "dma_device_id": "system", 00:20:31.857 "dma_device_type": 1 00:20:31.857 } 00:20:31.857 ], 00:20:31.857 "driver_specific": { 00:20:31.857 "nvme": [ 00:20:31.857 { 00:20:31.857 "trid": { 00:20:31.857 "trtype": "TCP", 00:20:31.857 "adrfam": "IPv4", 00:20:31.857 "traddr": "10.0.0.2", 00:20:31.857 "trsvcid": "4421", 00:20:31.857 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:31.857 }, 00:20:31.857 "ctrlr_data": { 00:20:31.857 "cntlid": 3, 00:20:31.857 "vendor_id": "0x8086", 00:20:31.857 "model_number": "SPDK bdev Controller", 00:20:31.857 "serial_number": "00000000000000000000", 00:20:31.857 "firmware_revision": "24.09", 00:20:31.857 "subnqn": "nqn.2016-06.io.spdk:cnode0", 
00:20:31.857 "oacs": { 00:20:31.857 "security": 0, 00:20:31.857 "format": 0, 00:20:31.857 "firmware": 0, 00:20:31.857 "ns_manage": 0 00:20:31.857 }, 00:20:31.857 "multi_ctrlr": true, 00:20:31.857 "ana_reporting": false 00:20:31.857 }, 00:20:31.857 "vs": { 00:20:31.857 "nvme_version": "1.3" 00:20:31.857 }, 00:20:31.857 "ns_data": { 00:20:31.857 "id": 1, 00:20:31.857 "can_share": true 00:20:31.857 } 00:20:31.857 } 00:20:31.857 ], 00:20:31.857 "mp_policy": "active_passive" 00:20:31.857 } 00:20:31.857 } 00:20:31.857 ] 00:20:31.857 01:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.857 01:13:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:31.857 01:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.857 01:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:31.857 01:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.857 01:13:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.a0PyaXZYqX 00:20:31.857 01:13:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:20:31.857 01:13:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:20:31.857 01:13:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:31.857 01:13:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:20:31.857 01:13:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:31.857 01:13:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:20:31.857 01:13:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:31.857 01:13:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:31.857 rmmod nvme_tcp 00:20:31.857 rmmod nvme_fabrics 00:20:31.857 rmmod nvme_keyring 00:20:31.857 01:13:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:31.857 01:13:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:20:31.857 01:13:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:20:31.857 01:13:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 7442 ']' 00:20:31.857 01:13:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 7442 00:20:31.857 01:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 7442 ']' 00:20:31.857 01:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 7442 00:20:31.857 01:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:20:31.857 01:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:31.857 01:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 7442 00:20:31.857 01:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:31.857 01:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:31.857 01:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 7442' 00:20:31.857 killing process with pid 7442 00:20:31.857 01:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 7442 00:20:31.857 [2024-07-16 01:13:47.724057] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 
hit 1 times 00:20:31.857 [2024-07-16 01:13:47.724088] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:31.857 01:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 7442 00:20:32.115 01:13:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:32.115 01:13:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:32.115 01:13:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:32.115 01:13:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:32.115 01:13:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:32.115 01:13:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:32.115 01:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:32.115 01:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:34.020 01:13:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:34.020 00:20:34.020 real 0m5.605s 00:20:34.020 user 0m2.122s 00:20:34.020 sys 0m1.867s 00:20:34.020 01:13:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:34.020 01:13:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:34.020 ************************************ 00:20:34.020 END TEST nvmf_async_init 00:20:34.020 ************************************ 00:20:34.279 01:13:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:34.279 01:13:50 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:34.279 01:13:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:34.279 01:13:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:34.279 01:13:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:34.279 ************************************ 00:20:34.279 START TEST dma 00:20:34.279 ************************************ 00:20:34.279 01:13:50 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:34.279 * Looking for test storage... 
00:20:34.279 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:34.279 01:13:50 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:34.279 01:13:50 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:20:34.279 01:13:50 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:34.279 01:13:50 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:34.279 01:13:50 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:34.279 01:13:50 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:34.279 01:13:50 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:34.279 01:13:50 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:34.279 01:13:50 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:34.279 01:13:50 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:34.279 01:13:50 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:34.279 01:13:50 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:34.279 01:13:50 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:34.279 01:13:50 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:34.279 01:13:50 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:34.279 01:13:50 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:34.279 01:13:50 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:34.279 01:13:50 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:34.279 01:13:50 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:34.279 01:13:50 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:34.279 01:13:50 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:34.279 01:13:50 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:34.279 01:13:50 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.279 01:13:50 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.279 01:13:50 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.279 01:13:50 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:20:34.279 01:13:50 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.279 01:13:50 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:20:34.279 01:13:50 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:34.279 01:13:50 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:34.279 01:13:50 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:34.279 01:13:50 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:34.279 01:13:50 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:34.279 01:13:50 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:34.279 01:13:50 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:34.279 01:13:50 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:34.279 01:13:50 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:20:34.279 01:13:50 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:20:34.279 00:20:34.279 real 0m0.067s 00:20:34.279 user 0m0.031s 00:20:34.279 sys 0m0.041s 00:20:34.279 01:13:50 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:34.279 01:13:50 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:20:34.279 ************************************ 00:20:34.279 END TEST dma 00:20:34.279 ************************************ 00:20:34.279 01:13:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:34.279 01:13:50 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:34.279 01:13:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:34.279 01:13:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:34.279 01:13:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:34.279 ************************************ 00:20:34.279 START TEST nvmf_identify 00:20:34.279 ************************************ 00:20:34.279 01:13:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:34.279 * Looking for test storage... 
00:20:34.279 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:34.279 01:13:50 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:34.279 01:13:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:20:34.279 01:13:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:34.279 01:13:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:34.279 01:13:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:34.279 01:13:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:34.279 01:13:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:34.279 01:13:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:34.279 01:13:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:34.279 01:13:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:34.279 01:13:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:34.279 01:13:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:34.279 01:13:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:34.279 01:13:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:34.279 01:13:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:34.280 01:13:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:34.280 01:13:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:34.280 01:13:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:34.280 01:13:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:34.280 01:13:50 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:34.280 01:13:50 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:34.280 01:13:50 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:34.280 01:13:50 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.280 01:13:50 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.280 01:13:50 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.280 01:13:50 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:20:34.280 01:13:50 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.280 01:13:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:20:34.280 01:13:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:34.280 01:13:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:34.280 01:13:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:34.280 01:13:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:34.280 01:13:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:34.280 01:13:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:34.280 01:13:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:34.280 01:13:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:34.280 01:13:50 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:34.280 01:13:50 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:34.280 01:13:50 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:20:34.280 01:13:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:34.280 01:13:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:34.280 01:13:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:34.280 01:13:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:34.280 01:13:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:34.280 01:13:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:34.280 01:13:50 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:34.280 01:13:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:34.280 01:13:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:34.280 01:13:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:34.280 01:13:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:20:34.280 01:13:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:36.181 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:36.181 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:36.181 Found net devices under 0000:09:00.0: cvl_0_0 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:36.181 Found net devices under 0000:09:00.1: cvl_0_1 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:36.181 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:36.439 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:36.439 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:36.439 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:36.439 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:36.439 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:36.439 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:36.440 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:36.440 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:36.440 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:20:36.440 00:20:36.440 --- 10.0.0.2 ping statistics --- 00:20:36.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:36.440 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:20:36.440 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:36.440 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:36.440 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:20:36.440 00:20:36.440 --- 10.0.0.1 ping statistics --- 00:20:36.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:36.440 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:20:36.440 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:36.440 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:20:36.440 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:36.440 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:36.440 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:36.440 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:36.440 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:36.440 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:36.440 01:13:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:36.440 01:13:52 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:20:36.440 01:13:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:36.440 01:13:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:36.440 01:13:52 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=9568 00:20:36.440 01:13:52 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:36.440 01:13:52 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:36.440 01:13:52 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 9568 00:20:36.440 01:13:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 9568 ']' 00:20:36.440 01:13:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:36.440 01:13:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:36.440 01:13:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:36.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:36.440 01:13:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:36.440 01:13:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:36.440 [2024-07-16 01:13:52.376626] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:20:36.440 [2024-07-16 01:13:52.376725] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:36.440 EAL: No free 2048 kB hugepages reported on node 1 00:20:36.697 [2024-07-16 01:13:52.446796] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:36.697 [2024-07-16 01:13:52.559067] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
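The nvmf_tcp_init sequence traced above (nvmf/common.sh@229-268) builds the whole test fabric out of one dual-port E810 NIC: port cvl_0_0 is moved into the network namespace cvl_0_0_ns_spdk and becomes the target side at 10.0.0.2/24, port cvl_0_1 stays in the default namespace as the initiator at 10.0.0.1/24, an iptables rule admits TCP port 4420 from the initiator interface, and one ping in each direction proves the loop. Condensed from the trace (interface names are specific to this host):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port into its own namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

nvmf_tgt is then started inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF), so every NVMe/TCP packet in the test really crosses the physical link between the two ports.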
00:20:36.697 [2024-07-16 01:13:52.559118] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:36.697 [2024-07-16 01:13:52.559146] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:36.697 [2024-07-16 01:13:52.559157] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:36.697 [2024-07-16 01:13:52.559168] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:36.697 [2024-07-16 01:13:52.559217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:36.697 [2024-07-16 01:13:52.562975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:36.697 [2024-07-16 01:13:52.563047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:36.697 [2024-07-16 01:13:52.566988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:36.956 01:13:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:36.957 01:13:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:20:36.957 01:13:52 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:36.957 01:13:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.957 01:13:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:36.957 [2024-07-16 01:13:52.699739] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:36.957 01:13:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.957 01:13:52 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:20:36.957 01:13:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:36.957 01:13:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:36.957 01:13:52 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:36.957 01:13:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.957 01:13:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:36.957 Malloc0 00:20:36.957 01:13:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.957 01:13:52 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:36.957 01:13:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.957 01:13:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:36.957 01:13:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.957 01:13:52 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:20:36.957 01:13:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.957 01:13:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:36.957 01:13:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.957 01:13:52 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:36.957 01:13:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 
-- # xtrace_disable 00:20:36.957 01:13:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:36.957 [2024-07-16 01:13:52.780707] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:36.957 01:13:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.957 01:13:52 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:36.957 01:13:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.957 01:13:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:36.957 01:13:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.957 01:13:52 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:20:36.957 01:13:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.957 01:13:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:36.957 [ 00:20:36.957 { 00:20:36.957 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:36.957 "subtype": "Discovery", 00:20:36.957 "listen_addresses": [ 00:20:36.957 { 00:20:36.957 "trtype": "TCP", 00:20:36.957 "adrfam": "IPv4", 00:20:36.957 "traddr": "10.0.0.2", 00:20:36.957 "trsvcid": "4420" 00:20:36.957 } 00:20:36.957 ], 00:20:36.957 "allow_any_host": true, 00:20:36.957 "hosts": [] 00:20:36.957 }, 00:20:36.957 { 00:20:36.957 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:36.957 "subtype": "NVMe", 00:20:36.957 "listen_addresses": [ 00:20:36.957 { 00:20:36.957 "trtype": "TCP", 00:20:36.957 "adrfam": "IPv4", 00:20:36.957 "traddr": "10.0.0.2", 00:20:36.957 "trsvcid": "4420" 00:20:36.957 } 00:20:36.957 ], 00:20:36.957 "allow_any_host": true, 00:20:36.957 "hosts": [], 00:20:36.957 "serial_number": "SPDK00000000000001", 00:20:36.957 "model_number": "SPDK bdev Controller", 00:20:36.957 "max_namespaces": 32, 00:20:36.957 "min_cntlid": 1, 00:20:36.957 "max_cntlid": 65519, 00:20:36.957 "namespaces": [ 00:20:36.957 { 00:20:36.957 "nsid": 1, 00:20:36.957 "bdev_name": "Malloc0", 00:20:36.957 "name": "Malloc0", 00:20:36.957 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:20:36.957 "eui64": "ABCDEF0123456789", 00:20:36.957 "uuid": "762037be-5809-47c5-a959-787f306ed269" 00:20:36.957 } 00:20:36.957 ] 00:20:36.957 } 00:20:36.957 ] 00:20:36.957 01:13:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.957 01:13:52 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:20:36.957 [2024-07-16 01:13:52.823976] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 
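The rpc_cmd calls traced at identify.sh@24-37 are the entire target-side provisioning; rpc_cmd is the harness wrapper around SPDK's scripts/rpc.py, which talks to the target over /var/tmp/spdk.sock. Replayed by hand, the sequence is roughly:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192    # TCP transport, 8 KiB IO unit size
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0       # 64 MiB RAM disk, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_get_subsystems                        # emits the JSON dump shown above

The nvmf_get_subsystems JSON above reflects exactly that: the discovery subsystem plus cnode1 carrying namespace 1 backed by Malloc0, both listening on 10.0.0.2:4420.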
00:20:36.957 [2024-07-16 01:13:52.824031] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid9707 ] 00:20:36.957 EAL: No free 2048 kB hugepages reported on node 1 00:20:36.957 [2024-07-16 01:13:52.859350] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:20:36.957 [2024-07-16 01:13:52.859420] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:36.957 [2024-07-16 01:13:52.859430] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:36.957 [2024-07-16 01:13:52.859444] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:36.957 [2024-07-16 01:13:52.859454] sock.c: 357:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:36.957 [2024-07-16 01:13:52.863400] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:20:36.957 [2024-07-16 01:13:52.863477] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x15686e0 0 00:20:36.957 [2024-07-16 01:13:52.870971] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:36.957 [2024-07-16 01:13:52.871000] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:36.957 [2024-07-16 01:13:52.871011] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:36.957 [2024-07-16 01:13:52.871017] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:36.957 [2024-07-16 01:13:52.871069] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:36.957 [2024-07-16 01:13:52.871087] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:36.957 [2024-07-16 01:13:52.871096] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15686e0) 00:20:36.957 [2024-07-16 01:13:52.871118] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:36.957 [2024-07-16 01:13:52.871144] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c8540, cid 0, qid 0 00:20:36.957 [2024-07-16 01:13:52.877986] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:36.957 [2024-07-16 01:13:52.878006] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:36.957 [2024-07-16 01:13:52.878013] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:36.957 [2024-07-16 01:13:52.878022] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c8540) on tqpair=0x15686e0 00:20:36.957 [2024-07-16 01:13:52.878040] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:36.957 [2024-07-16 01:13:52.878055] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:20:36.957 [2024-07-16 01:13:52.878065] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:20:36.957 [2024-07-16 01:13:52.878089] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:36.957 [2024-07-16 01:13:52.878097] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:36.957 [2024-07-16 01:13:52.878104] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15686e0) 00:20:36.957 [2024-07-16 01:13:52.878115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.957 [2024-07-16 01:13:52.878138] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c8540, cid 0, qid 0 00:20:36.957 [2024-07-16 01:13:52.878275] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:36.957 [2024-07-16 01:13:52.878291] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:36.957 [2024-07-16 01:13:52.878301] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:36.957 [2024-07-16 01:13:52.878309] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c8540) on tqpair=0x15686e0 00:20:36.957 [2024-07-16 01:13:52.878318] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:20:36.957 [2024-07-16 01:13:52.878331] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:20:36.957 [2024-07-16 01:13:52.878345] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:36.957 [2024-07-16 01:13:52.878355] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:36.957 [2024-07-16 01:13:52.878362] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15686e0) 00:20:36.957 [2024-07-16 01:13:52.878373] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.957 [2024-07-16 01:13:52.878395] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c8540, cid 0, qid 0 00:20:36.957 [2024-07-16 01:13:52.878503] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:36.957 [2024-07-16 01:13:52.878518] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:36.957 [2024-07-16 01:13:52.878525] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:36.957 [2024-07-16 01:13:52.878532] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c8540) on tqpair=0x15686e0 00:20:36.957 [2024-07-16 01:13:52.878541] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:20:36.957 [2024-07-16 01:13:52.878558] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:20:36.957 [2024-07-16 01:13:52.878573] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:36.958 [2024-07-16 01:13:52.878580] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:36.958 [2024-07-16 01:13:52.878591] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15686e0) 00:20:36.958 [2024-07-16 01:13:52.878602] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.958 [2024-07-16 01:13:52.878624] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c8540, cid 0, qid 0 00:20:36.958 [2024-07-16 01:13:52.878726] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:36.958 
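None of the nvme_ctrlr.c / nvme_tcp.c *DEBUG* lines here indicate a problem: they appear only because identify.sh@39 passed -L all, which turns on every SPDK log flag, and they narrate the normal fabric bring-up (ICReq/ICResp on the freshly connected socket, FABRIC CONNECT on the admin queue, reads of the vs and cap properties, then CC.EN=1 and polling until CSTS.RDY=1). The quiet equivalent is the same invocation without the flag:

    # same command as identify.sh@39 minus -L all; path relative to this
    # workspace's spdk checkout
    build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'

which leaves only the controller and discovery report that begins at the ===== banner below.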
[2024-07-16 01:13:52.878741] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:36.958 [2024-07-16 01:13:52.878749] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:36.958 [2024-07-16 01:13:52.878759] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c8540) on tqpair=0x15686e0 00:20:36.958 [2024-07-16 01:13:52.878769] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:36.958 [2024-07-16 01:13:52.878786] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:36.958 [2024-07-16 01:13:52.878795] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:36.958 [2024-07-16 01:13:52.878804] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15686e0) 00:20:36.958 [2024-07-16 01:13:52.878816] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.958 [2024-07-16 01:13:52.878839] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c8540, cid 0, qid 0 00:20:36.958 [2024-07-16 01:13:52.878938] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:36.958 [2024-07-16 01:13:52.878962] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:36.958 [2024-07-16 01:13:52.878974] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:36.958 [2024-07-16 01:13:52.878981] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c8540) on tqpair=0x15686e0 00:20:36.958 [2024-07-16 01:13:52.878992] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:20:36.958 [2024-07-16 01:13:52.879001] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:20:36.958 [2024-07-16 01:13:52.879015] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:36.958 [2024-07-16 01:13:52.879129] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:20:36.958 [2024-07-16 01:13:52.879137] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:36.958 [2024-07-16 01:13:52.879154] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:36.958 [2024-07-16 01:13:52.879162] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:36.958 [2024-07-16 01:13:52.879168] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15686e0) 00:20:36.958 [2024-07-16 01:13:52.879194] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.958 [2024-07-16 01:13:52.879217] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c8540, cid 0, qid 0 00:20:36.958 [2024-07-16 01:13:52.879341] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:36.958 [2024-07-16 01:13:52.879358] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:36.958 [2024-07-16 01:13:52.879381] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:20:36.958 [2024-07-16 01:13:52.879388] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c8540) on tqpair=0x15686e0 00:20:36.958 [2024-07-16 01:13:52.879398] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:36.958 [2024-07-16 01:13:52.879421] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:36.958 [2024-07-16 01:13:52.879432] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:36.958 [2024-07-16 01:13:52.879439] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15686e0) 00:20:36.958 [2024-07-16 01:13:52.879450] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.958 [2024-07-16 01:13:52.879471] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c8540, cid 0, qid 0 00:20:36.958 [2024-07-16 01:13:52.879580] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:36.958 [2024-07-16 01:13:52.879596] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:36.958 [2024-07-16 01:13:52.879606] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:36.958 [2024-07-16 01:13:52.879613] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c8540) on tqpair=0x15686e0 00:20:36.958 [2024-07-16 01:13:52.879622] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:36.958 [2024-07-16 01:13:52.879630] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:20:36.958 [2024-07-16 01:13:52.879644] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:20:36.958 [2024-07-16 01:13:52.879662] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:20:36.958 [2024-07-16 01:13:52.879681] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:36.958 [2024-07-16 01:13:52.879689] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15686e0) 00:20:36.958 [2024-07-16 01:13:52.879700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.958 [2024-07-16 01:13:52.879722] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c8540, cid 0, qid 0 00:20:36.958 [2024-07-16 01:13:52.879897] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:36.958 [2024-07-16 01:13:52.879913] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:36.958 [2024-07-16 01:13:52.879921] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:36.958 [2024-07-16 01:13:52.879929] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15686e0): datao=0, datal=4096, cccid=0 00:20:36.958 [2024-07-16 01:13:52.879942] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15c8540) on tqpair(0x15686e0): expected_datao=0, payload_size=4096 00:20:36.958 [2024-07-16 01:13:52.879963] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:20:36.958 [2024-07-16 01:13:52.879980] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:36.958 [2024-07-16 01:13:52.879989] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:36.958 [2024-07-16 01:13:52.880003] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:36.958 [2024-07-16 01:13:52.880013] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:36.958 [2024-07-16 01:13:52.880020] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:36.958 [2024-07-16 01:13:52.880027] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c8540) on tqpair=0x15686e0 00:20:36.958 [2024-07-16 01:13:52.880040] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:20:36.958 [2024-07-16 01:13:52.880049] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:20:36.958 [2024-07-16 01:13:52.880057] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:20:36.958 [2024-07-16 01:13:52.880066] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:20:36.958 [2024-07-16 01:13:52.880079] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:20:36.958 [2024-07-16 01:13:52.880088] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:20:36.958 [2024-07-16 01:13:52.880104] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:20:36.958 [2024-07-16 01:13:52.880124] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:36.958 [2024-07-16 01:13:52.880133] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:36.958 [2024-07-16 01:13:52.880140] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15686e0) 00:20:36.958 [2024-07-16 01:13:52.880151] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:36.958 [2024-07-16 01:13:52.880174] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c8540, cid 0, qid 0 00:20:36.958 [2024-07-16 01:13:52.880300] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:36.958 [2024-07-16 01:13:52.880331] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:36.958 [2024-07-16 01:13:52.880338] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:36.958 [2024-07-16 01:13:52.880345] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c8540) on tqpair=0x15686e0 00:20:36.958 [2024-07-16 01:13:52.880360] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:36.958 [2024-07-16 01:13:52.880367] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:36.958 [2024-07-16 01:13:52.880373] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15686e0) 00:20:36.958 [2024-07-16 01:13:52.880383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:36.958 [2024-07-16 01:13:52.880393] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:36.958 [2024-07-16 01:13:52.880400] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:36.958 [2024-07-16 01:13:52.880406] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x15686e0) 00:20:36.958 [2024-07-16 01:13:52.880415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:36.958 [2024-07-16 01:13:52.880425] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:36.958 [2024-07-16 01:13:52.880431] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:36.958 [2024-07-16 01:13:52.880437] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x15686e0) 00:20:36.959 [2024-07-16 01:13:52.880446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:36.959 [2024-07-16 01:13:52.880456] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:36.959 [2024-07-16 01:13:52.880462] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:36.959 [2024-07-16 01:13:52.880484] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15686e0) 00:20:36.959 [2024-07-16 01:13:52.880493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:36.959 [2024-07-16 01:13:52.880501] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:20:36.959 [2024-07-16 01:13:52.880523] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:36.959 [2024-07-16 01:13:52.880537] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:36.959 [2024-07-16 01:13:52.880544] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15686e0) 00:20:36.959 [2024-07-16 01:13:52.880554] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.959 [2024-07-16 01:13:52.880580] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c8540, cid 0, qid 0 00:20:36.959 [2024-07-16 01:13:52.880607] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c86c0, cid 1, qid 0 00:20:36.959 [2024-07-16 01:13:52.880616] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c8840, cid 2, qid 0 00:20:36.959 [2024-07-16 01:13:52.880623] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c89c0, cid 3, qid 0 00:20:36.959 [2024-07-16 01:13:52.880631] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c8b40, cid 4, qid 0 00:20:36.959 [2024-07-16 01:13:52.880784] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:36.959 [2024-07-16 01:13:52.880802] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:36.959 [2024-07-16 01:13:52.880810] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:36.959 [2024-07-16 01:13:52.880817] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c8b40) on tqpair=0x15686e0 00:20:36.959 [2024-07-16 01:13:52.880827] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:20:36.959 [2024-07-16 01:13:52.880837] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:20:36.959 [2024-07-16 01:13:52.880857] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:36.959 [2024-07-16 01:13:52.880868] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15686e0) 00:20:36.959 [2024-07-16 01:13:52.880879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.959 [2024-07-16 01:13:52.880914] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c8b40, cid 4, qid 0 00:20:36.959 [2024-07-16 01:13:52.881049] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:36.959 [2024-07-16 01:13:52.881065] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:36.959 [2024-07-16 01:13:52.881072] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:36.959 [2024-07-16 01:13:52.881081] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15686e0): datao=0, datal=4096, cccid=4 00:20:36.959 [2024-07-16 01:13:52.881094] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15c8b40) on tqpair(0x15686e0): expected_datao=0, payload_size=4096 00:20:36.959 [2024-07-16 01:13:52.881105] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:36.959 [2024-07-16 01:13:52.881128] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:36.959 [2024-07-16 01:13:52.881137] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:36.959 [2024-07-16 01:13:52.924968] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:36.959 [2024-07-16 01:13:52.924987] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:36.959 [2024-07-16 01:13:52.924995] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:36.959 [2024-07-16 01:13:52.925002] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c8b40) on tqpair=0x15686e0 00:20:36.959 [2024-07-16 01:13:52.925023] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:20:36.959 [2024-07-16 01:13:52.925069] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:36.959 [2024-07-16 01:13:52.925080] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15686e0) 00:20:36.959 [2024-07-16 01:13:52.925092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.959 [2024-07-16 01:13:52.925104] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:36.959 [2024-07-16 01:13:52.925111] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:36.959 [2024-07-16 01:13:52.925117] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x15686e0) 00:20:36.959 [2024-07-16 01:13:52.925130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:36.959 [2024-07-16 01:13:52.925160] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x15c8b40, cid 4, qid 0 00:20:36.959 [2024-07-16 01:13:52.925187] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c8cc0, cid 5, qid 0 00:20:36.959 [2024-07-16 01:13:52.925350] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:36.959 [2024-07-16 01:13:52.925368] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:36.959 [2024-07-16 01:13:52.925376] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:36.959 [2024-07-16 01:13:52.925382] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15686e0): datao=0, datal=1024, cccid=4 00:20:36.959 [2024-07-16 01:13:52.925390] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15c8b40) on tqpair(0x15686e0): expected_datao=0, payload_size=1024 00:20:36.959 [2024-07-16 01:13:52.925398] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:36.959 [2024-07-16 01:13:52.925408] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:36.959 [2024-07-16 01:13:52.925415] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:36.959 [2024-07-16 01:13:52.925424] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:36.959 [2024-07-16 01:13:52.925433] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:36.959 [2024-07-16 01:13:52.925440] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:36.959 [2024-07-16 01:13:52.925446] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c8cc0) on tqpair=0x15686e0 00:20:37.221 [2024-07-16 01:13:52.966051] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:37.221 [2024-07-16 01:13:52.966072] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:37.221 [2024-07-16 01:13:52.966082] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:37.221 [2024-07-16 01:13:52.966090] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c8b40) on tqpair=0x15686e0 00:20:37.221 [2024-07-16 01:13:52.966116] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:37.221 [2024-07-16 01:13:52.966127] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15686e0) 00:20:37.221 [2024-07-16 01:13:52.966139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.221 [2024-07-16 01:13:52.966172] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c8b40, cid 4, qid 0 00:20:37.221 [2024-07-16 01:13:52.966285] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:37.221 [2024-07-16 01:13:52.966301] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:37.221 [2024-07-16 01:13:52.966311] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:37.221 [2024-07-16 01:13:52.966322] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15686e0): datao=0, datal=3072, cccid=4 00:20:37.221 [2024-07-16 01:13:52.966335] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15c8b40) on tqpair(0x15686e0): expected_datao=0, payload_size=3072 00:20:37.221 [2024-07-16 01:13:52.966347] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:37.221 [2024-07-16 01:13:52.966360] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:37.221 [2024-07-16 01:13:52.966368] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:37.221 [2024-07-16 01:13:52.966380] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:37.221 [2024-07-16 01:13:52.966390] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:37.221 [2024-07-16 01:13:52.966397] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:37.221 [2024-07-16 01:13:52.966404] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c8b40) on tqpair=0x15686e0 00:20:37.221 [2024-07-16 01:13:52.966421] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:37.221 [2024-07-16 01:13:52.966430] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15686e0) 00:20:37.221 [2024-07-16 01:13:52.966447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.221 [2024-07-16 01:13:52.966479] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c8b40, cid 4, qid 0 00:20:37.221 [2024-07-16 01:13:52.966595] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:37.221 [2024-07-16 01:13:52.966611] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:37.221 [2024-07-16 01:13:52.966618] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:37.221 [2024-07-16 01:13:52.966625] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15686e0): datao=0, datal=8, cccid=4 00:20:37.221 [2024-07-16 01:13:52.966632] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15c8b40) on tqpair(0x15686e0): expected_datao=0, payload_size=8 00:20:37.221 [2024-07-16 01:13:52.966640] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:37.221 [2024-07-16 01:13:52.966650] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:37.221 [2024-07-16 01:13:52.966657] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:37.221 [2024-07-16 01:13:53.007974] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:37.221 [2024-07-16 01:13:53.007993] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:37.221 [2024-07-16 01:13:53.008016] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:37.221 [2024-07-16 01:13:53.008024] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c8b40) on tqpair=0x15686e0 00:20:37.221 ===================================================== 00:20:37.221 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:37.221 ===================================================== 00:20:37.221 Controller Capabilities/Features 00:20:37.221 ================================ 00:20:37.221 Vendor ID: 0000 00:20:37.221 Subsystem Vendor ID: 0000 00:20:37.221 Serial Number: .................... 00:20:37.221 Model Number: ........................................ 
00:20:37.221 Firmware Version: 24.09 00:20:37.221 Recommended Arb Burst: 0 00:20:37.221 IEEE OUI Identifier: 00 00 00 00:20:37.221 Multi-path I/O 00:20:37.221 May have multiple subsystem ports: No 00:20:37.221 May have multiple controllers: No 00:20:37.221 Associated with SR-IOV VF: No 00:20:37.221 Max Data Transfer Size: 131072 00:20:37.221 Max Number of Namespaces: 0 00:20:37.221 Max Number of I/O Queues: 1024 00:20:37.221 NVMe Specification Version (VS): 1.3 00:20:37.221 NVMe Specification Version (Identify): 1.3 00:20:37.221 Maximum Queue Entries: 128 00:20:37.221 Contiguous Queues Required: Yes 00:20:37.221 Arbitration Mechanisms Supported 00:20:37.221 Weighted Round Robin: Not Supported 00:20:37.221 Vendor Specific: Not Supported 00:20:37.221 Reset Timeout: 15000 ms 00:20:37.221 Doorbell Stride: 4 bytes 00:20:37.221 NVM Subsystem Reset: Not Supported 00:20:37.221 Command Sets Supported 00:20:37.221 NVM Command Set: Supported 00:20:37.222 Boot Partition: Not Supported 00:20:37.222 Memory Page Size Minimum: 4096 bytes 00:20:37.222 Memory Page Size Maximum: 4096 bytes 00:20:37.222 Persistent Memory Region: Not Supported 00:20:37.222 Optional Asynchronous Events Supported 00:20:37.222 Namespace Attribute Notices: Not Supported 00:20:37.222 Firmware Activation Notices: Not Supported 00:20:37.222 ANA Change Notices: Not Supported 00:20:37.222 PLE Aggregate Log Change Notices: Not Supported 00:20:37.222 LBA Status Info Alert Notices: Not Supported 00:20:37.222 EGE Aggregate Log Change Notices: Not Supported 00:20:37.222 Normal NVM Subsystem Shutdown event: Not Supported 00:20:37.222 Zone Descriptor Change Notices: Not Supported 00:20:37.222 Discovery Log Change Notices: Supported 00:20:37.222 Controller Attributes 00:20:37.222 128-bit Host Identifier: Not Supported 00:20:37.222 Non-Operational Permissive Mode: Not Supported 00:20:37.222 NVM Sets: Not Supported 00:20:37.222 Read Recovery Levels: Not Supported 00:20:37.222 Endurance Groups: Not Supported 00:20:37.222 Predictable Latency Mode: Not Supported 00:20:37.222 Traffic Based Keep ALive: Not Supported 00:20:37.222 Namespace Granularity: Not Supported 00:20:37.222 SQ Associations: Not Supported 00:20:37.222 UUID List: Not Supported 00:20:37.222 Multi-Domain Subsystem: Not Supported 00:20:37.222 Fixed Capacity Management: Not Supported 00:20:37.222 Variable Capacity Management: Not Supported 00:20:37.222 Delete Endurance Group: Not Supported 00:20:37.222 Delete NVM Set: Not Supported 00:20:37.222 Extended LBA Formats Supported: Not Supported 00:20:37.222 Flexible Data Placement Supported: Not Supported 00:20:37.222 00:20:37.222 Controller Memory Buffer Support 00:20:37.222 ================================ 00:20:37.222 Supported: No 00:20:37.222 00:20:37.222 Persistent Memory Region Support 00:20:37.222 ================================ 00:20:37.222 Supported: No 00:20:37.222 00:20:37.222 Admin Command Set Attributes 00:20:37.222 ============================ 00:20:37.222 Security Send/Receive: Not Supported 00:20:37.222 Format NVM: Not Supported 00:20:37.222 Firmware Activate/Download: Not Supported 00:20:37.222 Namespace Management: Not Supported 00:20:37.222 Device Self-Test: Not Supported 00:20:37.222 Directives: Not Supported 00:20:37.222 NVMe-MI: Not Supported 00:20:37.222 Virtualization Management: Not Supported 00:20:37.222 Doorbell Buffer Config: Not Supported 00:20:37.222 Get LBA Status Capability: Not Supported 00:20:37.222 Command & Feature Lockdown Capability: Not Supported 00:20:37.222 Abort Command Limit: 1 00:20:37.222 Async 
00:20:37.222 Number of Firmware Slots: N/A
00:20:37.222 Firmware Slot 1 Read-Only: N/A
00:20:37.222 Firmware Activation Without Reset: N/A
00:20:37.222 Multiple Update Detection Support: N/A
00:20:37.222 Firmware Update Granularity: No Information Provided
00:20:37.222 Per-Namespace SMART Log: No
00:20:37.222 Asymmetric Namespace Access Log Page: Not Supported
00:20:37.222 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:20:37.222 Command Effects Log Page: Not Supported
00:20:37.222 Get Log Page Extended Data: Supported
00:20:37.222 Telemetry Log Pages: Not Supported
00:20:37.222 Persistent Event Log Pages: Not Supported
00:20:37.222 Supported Log Pages Log Page: May Support
00:20:37.222 Commands Supported & Effects Log Page: Not Supported
00:20:37.222 Feature Identifiers & Effects Log Page:May Support
00:20:37.222 NVMe-MI Commands & Effects Log Page: May Support
00:20:37.222 Data Area 4 for Telemetry Log: Not Supported
00:20:37.222 Error Log Page Entries Supported: 128
00:20:37.222 Keep Alive: Not Supported
00:20:37.222
00:20:37.222 NVM Command Set Attributes
00:20:37.222 ==========================
00:20:37.222 Submission Queue Entry Size
00:20:37.222 Max: 1
00:20:37.222 Min: 1
00:20:37.222 Completion Queue Entry Size
00:20:37.222 Max: 1
00:20:37.222 Min: 1
00:20:37.222 Number of Namespaces: 0
00:20:37.222 Compare Command: Not Supported
00:20:37.222 Write Uncorrectable Command: Not Supported
00:20:37.222 Dataset Management Command: Not Supported
00:20:37.222 Write Zeroes Command: Not Supported
00:20:37.222 Set Features Save Field: Not Supported
00:20:37.222 Reservations: Not Supported
00:20:37.222 Timestamp: Not Supported
00:20:37.222 Copy: Not Supported
00:20:37.222 Volatile Write Cache: Not Present
00:20:37.222 Atomic Write Unit (Normal): 1
00:20:37.222 Atomic Write Unit (PFail): 1
00:20:37.222 Atomic Compare & Write Unit: 1
00:20:37.222 Fused Compare & Write: Supported
00:20:37.222 Scatter-Gather List
00:20:37.222 SGL Command Set: Supported
00:20:37.222 SGL Keyed: Supported
00:20:37.222 SGL Bit Bucket Descriptor: Not Supported
00:20:37.222 SGL Metadata Pointer: Not Supported
00:20:37.222 Oversized SGL: Not Supported
00:20:37.222 SGL Metadata Address: Not Supported
00:20:37.222 SGL Offset: Supported
00:20:37.222 Transport SGL Data Block: Not Supported
00:20:37.222 Replay Protected Memory Block: Not Supported
00:20:37.222
00:20:37.222 Firmware Slot Information
00:20:37.222 =========================
00:20:37.222 Active slot: 0
00:20:37.222
00:20:37.222
00:20:37.222 Error Log
00:20:37.222 =========
00:20:37.222
00:20:37.222 Active Namespaces
00:20:37.222 =================
00:20:37.222 Discovery Log Page
00:20:37.222 ==================
00:20:37.222 Generation Counter: 2
00:20:37.222 Number of Records: 2
00:20:37.222 Record Format: 0
00:20:37.222
00:20:37.222 Discovery Log Entry 0
00:20:37.222 ----------------------
00:20:37.222 Transport Type: 3 (TCP)
00:20:37.222 Address Family: 1 (IPv4)
00:20:37.222 Subsystem Type: 3 (Current Discovery Subsystem)
00:20:37.222 Entry Flags:
00:20:37.222 Duplicate Returned Information: 1
00:20:37.222 Explicit Persistent Connection Support for Discovery: 1
00:20:37.222 Transport Requirements:
00:20:37.222 Secure Channel: Not Required
00:20:37.222 Port ID: 0 (0x0000)
00:20:37.222 Controller ID: 65535 (0xffff)
00:20:37.222 Admin Max SQ Size: 128
00:20:37.222 Transport Service Identifier: 4420
00:20:37.222 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:20:37.222 Transport Address: 10.0.0.2
00:20:37.222 Discovery Log Entry 1
00:20:37.222 ----------------------
00:20:37.222 Transport Type: 3 (TCP)
00:20:37.222 Address Family: 1 (IPv4)
00:20:37.222 Subsystem Type: 2 (NVM Subsystem)
00:20:37.222 Entry Flags:
00:20:37.222 Duplicate Returned Information: 0
00:20:37.222 Explicit Persistent Connection Support for Discovery: 0
00:20:37.222 Transport Requirements:
00:20:37.222 Secure Channel: Not Required
00:20:37.222 Port ID: 0 (0x0000)
00:20:37.222 Controller ID: 65535 (0xffff)
00:20:37.222 Admin Max SQ Size: 128
00:20:37.222 Transport Service Identifier: 4420
00:20:37.222 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:20:37.222 Transport Address: 10.0.0.2 [2024-07-16 01:13:53.008150] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:20:37.222 [2024-07-16 01:13:53.008175] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c8540) on tqpair=0x15686e0
00:20:37.222 [2024-07-16 01:13:53.008190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:37.222 [2024-07-16 01:13:53.008200] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c86c0) on tqpair=0x15686e0
00:20:37.222 [2024-07-16 01:13:53.008207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:37.222 [2024-07-16 01:13:53.008216] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c8840) on tqpair=0x15686e0
00:20:37.222 [2024-07-16 01:13:53.008223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:37.222 [2024-07-16 01:13:53.008231] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c89c0) on tqpair=0x15686e0
00:20:37.222 [2024-07-16 01:13:53.008253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:37.222 [2024-07-16 01:13:53.008268] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:37.222 [2024-07-16 01:13:53.008276] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:37.222 [2024-07-16 01:13:53.008282] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15686e0)
00:20:37.222 [2024-07-16 01:13:53.008293] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.222 [2024-07-16 01:13:53.008334] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c89c0, cid 3, qid 0
00:20:37.222 [2024-07-16 01:13:53.008460] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:37.222 [2024-07-16 01:13:53.008476] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:37.222 [2024-07-16 01:13:53.008483] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:37.222 [2024-07-16 01:13:53.008490] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c89c0) on tqpair=0x15686e0
00:20:37.222 [2024-07-16 01:13:53.008503] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:37.222 [2024-07-16 01:13:53.008511] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:37.222 [2024-07-16 01:13:53.008517] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15686e0)
00:20:37.222 [2024-07-16 01:13:53.008532] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.222 [2024-07-16 01:13:53.008562] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c89c0, cid 3, qid 0
00:20:37.223 [2024-07-16 01:13:53.008677] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:37.223 [2024-07-16 01:13:53.008693] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:37.223 [2024-07-16 01:13:53.008700] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:37.223 [2024-07-16 01:13:53.008707] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c89c0) on tqpair=0x15686e0
00:20:37.223 [2024-07-16 01:13:53.008717] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us
00:20:37.223 [2024-07-16 01:13:53.008727] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms
00:20:37.223 [2024-07-16 01:13:53.008745] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:37.223 [2024-07-16 01:13:53.008755] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:37.223 [2024-07-16 01:13:53.008762] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15686e0)
00:20:37.223 [2024-07-16 01:13:53.008773] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.223 [2024-07-16 01:13:53.008794] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c89c0, cid 3, qid 0
00:20:37.223 [2024-07-16 01:13:53.008894] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:37.223 [2024-07-16 01:13:53.008909] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:37.223 [2024-07-16 01:13:53.008916] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:37.223 [2024-07-16 01:13:53.008923] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c89c0) on tqpair=0x15686e0
00:20:37.223 [2024-07-16 01:13:53.008943] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:37.223 [2024-07-16 01:13:53.008962] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:37.223 [2024-07-16 01:13:53.008970] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15686e0)
00:20:37.223 [2024-07-16 01:13:53.008982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.223 [2024-07-16 01:13:53.009004] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c89c0, cid 3, qid 0
00:20:37.223 [2024-07-16 01:13:53.009112] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:37.223 [2024-07-16 01:13:53.009128] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:37.223 [2024-07-16 01:13:53.009135] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:37.223 [2024-07-16 01:13:53.009142] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c89c0) on tqpair=0x15686e0
00:20:37.223 [2024-07-16 01:13:53.009160] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:37.223 [2024-07-16 01:13:53.009171] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:37.223 [2024-07-16 01:13:53.009177] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15686e0)
00:20:37.223 [2024-07-16 01:13:53.009188] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.223 [2024-07-16 01:13:53.009210] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c89c0, cid 3, qid 0
00:20:37.223 [2024-07-16 01:13:53.009300] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:37.223 [2024-07-16 01:13:53.009316] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:37.223 [2024-07-16 01:13:53.009323] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:37.223 [2024-07-16 01:13:53.009333] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c89c0) on tqpair=0x15686e0
00:20:37.223 [2024-07-16 01:13:53.009351] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:37.223 [2024-07-16 01:13:53.009364] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:37.223 [2024-07-16 01:13:53.009371] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15686e0)
00:20:37.223 [2024-07-16 01:13:53.009385] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.223 [2024-07-16 01:13:53.009408] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c89c0, cid 3, qid 0
00:20:37.223 [2024-07-16 01:13:53.009501] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:37.223 [2024-07-16 01:13:53.009516] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:37.223 [2024-07-16 01:13:53.009523] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:37.223 [2024-07-16 01:13:53.009533] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c89c0) on tqpair=0x15686e0
00:20:37.223 [2024-07-16 01:13:53.009551] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:37.223 [2024-07-16 01:13:53.009560] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:37.223 [2024-07-16 01:13:53.009567] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15686e0)
00:20:37.223 [2024-07-16 01:13:53.009579] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.223 [2024-07-16 01:13:53.009603] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c89c0, cid 3, qid 0
00:20:37.223 [2024-07-16 01:13:53.009702] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:37.223 [2024-07-16 01:13:53.009717] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:37.223 [2024-07-16 01:13:53.009724] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:37.223 [2024-07-16 01:13:53.009731] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c89c0) on tqpair=0x15686e0
00:20:37.223 [2024-07-16 01:13:53.009750] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:37.223 [2024-07-16 01:13:53.009760] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:37.223 [2024-07-16 01:13:53.009767] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15686e0)
00:20:37.223 [2024-07-16 01:13:53.009778] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.223 [2024-07-16 01:13:53.009799] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c89c0, cid 3, qid 0
00:20:37.223 [2024-07-16 01:13:53.009906] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:37.223 [2024-07-16 01:13:53.009921] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:37.223 [2024-07-16 01:13:53.009929] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:37.223 [2024-07-16 01:13:53.009936] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c89c0) on tqpair=0x15686e0
00:20:37.223 [2024-07-16 01:13:53.009961] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:37.223 [2024-07-16 01:13:53.009973] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:37.223 [2024-07-16 01:13:53.009980] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15686e0)
00:20:37.223 [2024-07-16 01:13:53.009991] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.223 [2024-07-16 01:13:53.010013] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c89c0, cid 3, qid 0
00:20:37.223 [2024-07-16 01:13:53.010112] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:37.223 [2024-07-16 01:13:53.010128] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:37.223 [2024-07-16 01:13:53.010135] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:37.223 [2024-07-16 01:13:53.010142] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c89c0) on tqpair=0x15686e0
00:20:37.223 [2024-07-16 01:13:53.010160] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:37.223 [2024-07-16 01:13:53.010170] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:37.223 [2024-07-16 01:13:53.010181] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15686e0)
00:20:37.223 [2024-07-16 01:13:53.010192] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.223 [2024-07-16 01:13:53.010214] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c89c0, cid 3, qid 0
00:20:37.223 [2024-07-16 01:13:53.010313] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:37.223 [2024-07-16 01:13:53.010328] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:37.223 [2024-07-16 01:13:53.010335] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:37.223 [2024-07-16 01:13:53.010342] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c89c0) on tqpair=0x15686e0
00:20:37.223 [2024-07-16 01:13:53.010360] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:37.223 [2024-07-16 01:13:53.010371] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:37.223 [2024-07-16 01:13:53.010378] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15686e0)
00:20:37.223 [2024-07-16 01:13:53.010388] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.223 [2024-07-16 01:13:53.010410] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c89c0, cid 3, qid 0
00:20:37.223 [2024-07-16 01:13:53.010498] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:37.223 [2024-07-16 01:13:53.010514] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:37.223 [2024-07-16 01:13:53.010521] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:37.223 [2024-07-16 01:13:53.010531] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c89c0) on tqpair=0x15686e0
00:20:37.223 [2024-07-16 01:13:53.010549] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:37.223 [2024-07-16 01:13:53.010558] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:37.223 [2024-07-16 01:13:53.010564] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15686e0)
00:20:37.223 [2024-07-16 01:13:53.010577] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.223 [2024-07-16 01:13:53.010600] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c89c0, cid 3, qid 0
00:20:37.223 [2024-07-16 01:13:53.010689] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:37.223 [2024-07-16 01:13:53.010705] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:37.223 [2024-07-16 01:13:53.010712] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:37.223 [2024-07-16 01:13:53.010721] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c89c0) on tqpair=0x15686e0
00:20:37.223 [2024-07-16 01:13:53.010740] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:37.223 [2024-07-16 01:13:53.010749] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:37.223 [2024-07-16 01:13:53.010755] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15686e0)
00:20:37.223 [2024-07-16 01:13:53.010767] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.223 [2024-07-16 01:13:53.010791] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c89c0, cid 3, qid 0
00:20:37.223 [2024-07-16 01:13:53.010881] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:37.223 [2024-07-16 01:13:53.010896] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:37.223 [2024-07-16 01:13:53.010903] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:37.223 [2024-07-16 01:13:53.010913] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c89c0) on tqpair=0x15686e0
00:20:37.223 [2024-07-16 01:13:53.010932] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:37.223 [2024-07-16 01:13:53.010940] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:37.223 [2024-07-16 01:13:53.010947] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15686e0)
00:20:37.224 [2024-07-16 01:13:53.010973] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.224 [2024-07-16 01:13:53.010999] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c89c0, cid 3, qid 0
00:20:37.224 [2024-07-16 01:13:53.011089] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:37.224 [2024-07-16 01:13:53.011105] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:37.224 [2024-07-16 01:13:53.011112] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:37.224 [2024-07-16 01:13:53.011122] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c89c0) on tqpair=0x15686e0
00:20:37.224 [2024-07-16 01:13:53.011140] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:37.224 [2024-07-16 01:13:53.011149] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:37.224 [2024-07-16 01:13:53.011155] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15686e0)
00:20:37.224 [2024-07-16 01:13:53.011166] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.224 [2024-07-16 01:13:53.011191] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c89c0, cid 3, qid 0
00:20:37.224 [2024-07-16 01:13:53.011288] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:37.224 [2024-07-16 01:13:53.011303] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:37.224 [2024-07-16 01:13:53.011310] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:37.224 [2024-07-16 01:13:53.011317] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c89c0) on tqpair=0x15686e0
00:20:37.224 [2024-07-16 01:13:53.011336] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:37.224 [2024-07-16 01:13:53.011346] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:37.224 [2024-07-16 01:13:53.011353] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15686e0)
00:20:37.224 [2024-07-16 01:13:53.011364] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.224 [2024-07-16 01:13:53.011385] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c89c0, cid 3, qid 0
00:20:37.224 [2024-07-16 01:13:53.011484] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:37.224 [2024-07-16 01:13:53.011499] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:37.224 [2024-07-16 01:13:53.011506] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:37.224 [2024-07-16 01:13:53.011513] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c89c0) on tqpair=0x15686e0
00:20:37.224 [2024-07-16 01:13:53.011532] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:37.224 [2024-07-16 01:13:53.011542] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:37.224 [2024-07-16 01:13:53.011549] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15686e0)
00:20:37.224 [2024-07-16 01:13:53.011560] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.224 [2024-07-16 01:13:53.011581] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c89c0, cid 3, qid 0
00:20:37.224 [2024-07-16 01:13:53.011678] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:37.224 [2024-07-16 01:13:53.011694] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:37.224 [2024-07-16 01:13:53.011701] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:37.224 [2024-07-16 01:13:53.011708] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c89c0) on tqpair=0x15686e0
00:20:37.224 [2024-07-16 01:13:53.011726] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:37.224 [2024-07-16 01:13:53.011736] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:37.224 [2024-07-16 01:13:53.011743] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15686e0)
00:20:37.224 [2024-07-16 01:13:53.011754] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.224 [2024-07-16 01:13:53.011779] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c89c0, cid 3, qid 0
00:20:37.224 [2024-07-16 01:13:53.011876] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:37.224 [2024-07-16 01:13:53.011895] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:37.224 [2024-07-16 01:13:53.011903] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:37.224 [2024-07-16 01:13:53.011910] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c89c0) on tqpair=0x15686e0
00:20:37.224 [2024-07-16 01:13:53.011929] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:37.224 [2024-07-16 01:13:53.011940] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:37.224 [2024-07-16 01:13:53.011946] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15686e0)
00:20:37.224 [2024-07-16 01:13:53.011966] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.224 [2024-07-16 01:13:53.011991] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c89c0, cid 3, qid 0
00:20:37.224 [2024-07-16 01:13:53.012095] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:37.224 [2024-07-16 01:13:53.012110] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:37.224 [2024-07-16 01:13:53.012117] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:37.224 [2024-07-16 01:13:53.012124] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c89c0) on tqpair=0x15686e0
00:20:37.224 [2024-07-16 01:13:53.012143] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:37.224 [2024-07-16 01:13:53.012153] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:37.224 [2024-07-16 01:13:53.012160] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15686e0)
00:20:37.224 [2024-07-16 01:13:53.012170] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.224 [2024-07-16 01:13:53.012192] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c89c0, cid 3, qid 0
00:20:37.224 [2024-07-16 01:13:53.012293] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:37.224 [2024-07-16 01:13:53.012309] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:37.224 [2024-07-16 01:13:53.012316] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:37.224 [2024-07-16 01:13:53.012323] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c89c0) on tqpair=0x15686e0
00:20:37.224 [2024-07-16 01:13:53.012341] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:37.224 [2024-07-16 01:13:53.012352] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:37.224 [2024-07-16 01:13:53.012358] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15686e0)
00:20:37.224 [2024-07-16 01:13:53.012369] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.224 [2024-07-16 01:13:53.012390] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c89c0, cid 3, qid 0
00:20:37.224 [2024-07-16 01:13:53.012486] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:37.224 [2024-07-16 01:13:53.012501] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:37.224 [2024-07-16 01:13:53.012508] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:37.224 [2024-07-16 01:13:53.012515] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c89c0) on tqpair=0x15686e0
00:20:37.224 [2024-07-16 01:13:53.012534] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:37.224 [2024-07-16 01:13:53.012545] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:37.224 [2024-07-16 01:13:53.012551] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15686e0)
00:20:37.224 [2024-07-16 01:13:53.012562] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.224 [2024-07-16 01:13:53.012588] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c89c0, cid 3, qid 0
00:20:37.224 [2024-07-16 01:13:53.012686] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:37.224 [2024-07-16 01:13:53.012701] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:37.224 [2024-07-16 01:13:53.012709] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:37.224 [2024-07-16 01:13:53.012718] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c89c0) on tqpair=0x15686e0
00:20:37.224 [2024-07-16 01:13:53.012736] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:37.224 [2024-07-16 01:13:53.012745] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:37.224 [2024-07-16 01:13:53.012751] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15686e0)
00:20:37.224 [2024-07-16 01:13:53.012764] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.224 [2024-07-16 01:13:53.012787] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c89c0, cid 3, qid 0
00:20:37.224 [2024-07-16 01:13:53.012877] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:37.224 [2024-07-16 01:13:53.012892] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:37.224 [2024-07-16 01:13:53.012899] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:37.224 [2024-07-16 01:13:53.012908] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c89c0) on tqpair=0x15686e0
00:20:37.224 [2024-07-16 01:13:53.012927] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:37.224 [2024-07-16 01:13:53.012936] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:37.224 [2024-07-16 01:13:53.012942] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15686e0)
00:20:37.224 [2024-07-16 01:13:53.012953] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.224 [2024-07-16 01:13:53.016989] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c89c0, cid 3, qid 0
00:20:37.224 [2024-07-16 01:13:53.017101] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:37.224 [2024-07-16 01:13:53.017117] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:37.224 [2024-07-16 01:13:53.017124] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:37.224 [2024-07-16 01:13:53.017134] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15c89c0) on tqpair=0x15686e0
00:20:37.224 [2024-07-16 01:13:53.017149] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 8 milliseconds
00:20:37.224
00:20:37.224 01:13:53 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:20:37.224 [2024-07-16 01:13:53.052908] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... [2024-07-16 01:13:53.052966] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid9709 ]
00:20:37.224 EAL: No free 2048 kB hugepages reported on node 1
00:20:37.224 [2024-07-16 01:13:53.087782] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout)
00:20:37.224 [2024-07-16 01:13:53.087837] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
00:20:37.224 [2024-07-16 01:13:53.087847] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
00:20:37.224 [2024-07-16 01:13:53.087860] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
00:20:37.224 [2024-07-16 01:13:53.087873] sock.c: 357:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:20:37.225 [2024-07-16 01:13:53.088385] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout)
00:20:37.225 [2024-07-16 01:13:53.088438] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x16d16e0 0
00:20:37.225 [2024-07-16 01:13:53.102967] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
00:20:37.225 [2024-07-16 01:13:53.102989] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1
00:20:37.225 [2024-07-16 01:13:53.102998] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
00:20:37.225 [2024-07-16 01:13:53.103004] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
00:20:37.225 [2024-07-16 01:13:53.103052] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:37.225 [2024-07-16 01:13:53.103064] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:37.225 [2024-07-16 01:13:53.103071] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16d16e0)
00:20:37.225 [2024-07-16 01:13:53.103086] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:20:37.225 [2024-07-16 01:13:53.103112] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1731540, cid 0, qid 0
00:20:37.225 [2024-07-16 01:13:53.108967] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:37.225 [2024-07-16 01:13:53.108985] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:37.225 [2024-07-16 01:13:53.108993] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:37.225 [2024-07-16 01:13:53.109015] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1731540) on tqpair=0x16d16e0
00:20:37.225 [2024-07-16 01:13:53.109033] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
00:20:37.225 [2024-07-16 01:13:53.109045] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout)
00:20:37.225 [2024-07-16 01:13:53.109054] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout)
00:20:37.225 [2024-07-16 01:13:53.109072] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:37.225 [2024-07-16 01:13:53.109081] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:37.225 [2024-07-16 01:13:53.109088] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16d16e0)
00:20:37.225 [2024-07-16 01:13:53.109100] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.225 [2024-07-16 01:13:53.109124] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1731540, cid 0, qid 0
00:20:37.225 [2024-07-16 01:13:53.109271] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:37.225 [2024-07-16 01:13:53.109283] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:37.225 [2024-07-16 01:13:53.109290] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:37.225 [2024-07-16 01:13:53.109297] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1731540) on tqpair=0x16d16e0
00:20:37.225 [2024-07-16 01:13:53.109305] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout)
00:20:37.225 [2024-07-16 01:13:53.109318] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout)
00:20:37.225 [2024-07-16 01:13:53.109330] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:37.225 [2024-07-16 01:13:53.109338] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:37.225 [2024-07-16 01:13:53.109344] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16d16e0)
00:20:37.225 [2024-07-16 01:13:53.109355] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.225 [2024-07-16 01:13:53.109376] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1731540, cid 0, qid 0
00:20:37.225 [2024-07-16 01:13:53.109473] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:37.225 [2024-07-16 01:13:53.109488] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:37.225 [2024-07-16 01:13:53.109495] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:37.225 [2024-07-16 01:13:53.109502] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1731540) on tqpair=0x16d16e0
00:20:37.225 [2024-07-16 01:13:53.109510] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout)
00:20:37.225 [2024-07-16 01:13:53.109524] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms)
00:20:37.225 [2024-07-16 01:13:53.109536] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:37.225 [2024-07-16 01:13:53.109544] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:37.225 [2024-07-16 01:13:53.109550] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16d16e0)
00:20:37.225 [2024-07-16 01:13:53.109561] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.225 [2024-07-16 01:13:53.109582] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1731540, cid 0, qid 0
00:20:37.225 [2024-07-16 01:13:53.109719] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:37.225 [2024-07-16 01:13:53.109731] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:37.225 [2024-07-16 01:13:53.109738] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:37.225 [2024-07-16 01:13:53.109744] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1731540) on tqpair=0x16d16e0
00:20:37.225 [2024-07-16 01:13:53.109752] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:20:37.225 [2024-07-16 01:13:53.109769] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:37.225 [2024-07-16 01:13:53.109777] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:37.225 [2024-07-16 01:13:53.109784] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16d16e0)
00:20:37.225 [2024-07-16 01:13:53.109794] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.225 [2024-07-16 01:13:53.109815] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1731540, cid 0, qid 0
00:20:37.225 [2024-07-16 01:13:53.109905] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:37.225 [2024-07-16 01:13:53.109920] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:37.225 [2024-07-16 01:13:53.109927] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:37.225 [2024-07-16 01:13:53.109934] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1731540) on tqpair=0x16d16e0
00:20:37.225 [2024-07-16 01:13:53.109941] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0
00:20:37.225 [2024-07-16 01:13:53.109950] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms)
00:20:37.225 [2024-07-16 01:13:53.109976] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:20:37.225 [2024-07-16 01:13:53.110088] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1
00:20:37.225 [2024-07-16 01:13:53.110098] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:20:37.225 [2024-07-16 01:13:53.110110] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:37.225 [2024-07-16 01:13:53.110117] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:37.225 [2024-07-16 01:13:53.110124] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16d16e0)
00:20:37.225 [2024-07-16 01:13:53.110135] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.225 [2024-07-16 01:13:53.110164] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1731540, cid 0, qid 0
00:20:37.225 [2024-07-16 01:13:53.110308] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:37.225 [2024-07-16 01:13:53.110323] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:37.225 [2024-07-16 01:13:53.110330] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:37.225 [2024-07-16 01:13:53.110337] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1731540) on tqpair=0x16d16e0
00:20:37.225 [2024-07-16 01:13:53.110345] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:20:37.225 [2024-07-16 01:13:53.110362] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:37.225 [2024-07-16 01:13:53.110371] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:37.225 [2024-07-16 01:13:53.110377] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16d16e0)
00:20:37.225 [2024-07-16 01:13:53.110388] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.225 [2024-07-16 01:13:53.110409] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1731540, cid 0, qid 0
00:20:37.225 [2024-07-16 01:13:53.110509] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:37.225 [2024-07-16 01:13:53.110524] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:37.225 [2024-07-16 01:13:53.110531] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:37.225 [2024-07-16 01:13:53.110537] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1731540) on tqpair=0x16d16e0
00:20:37.225 [2024-07-16 01:13:53.110545] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:20:37.225 [2024-07-16 01:13:53.110553] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms)
00:20:37.225 [2024-07-16 01:13:53.110566] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout)
00:20:37.225 [2024-07-16 01:13:53.110580] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms)
00:20:37.225 [2024-07-16 01:13:53.110594] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:37.225 [2024-07-16 01:13:53.110602] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16d16e0)
00:20:37.225 [2024-07-16 01:13:53.110613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.225 [2024-07-16 01:13:53.110635] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1731540, cid 0, qid 0
00:20:37.225 [2024-07-16 01:13:53.110764] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:20:37.225 [2024-07-16 01:13:53.110779] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:20:37.226 [2024-07-16 01:13:53.110786] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:20:37.226 [2024-07-16 01:13:53.110792] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16d16e0): datao=0, datal=4096, cccid=0
00:20:37.226 [2024-07-16 01:13:53.110800] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1731540) on tqpair(0x16d16e0): expected_datao=0, payload_size=4096
00:20:37.226 [2024-07-16 01:13:53.110807] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:37.226 [2024-07-16 01:13:53.110825] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:20:37.227 [2024-07-16 01:13:53.110834] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:20:37.227 [2024-07-16 01:13:53.151108] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:37.227 [2024-07-16 01:13:53.151127] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:37.227 [2024-07-16 01:13:53.151135] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:37.227 [2024-07-16 01:13:53.151146] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1731540) on tqpair=0x16d16e0
00:20:37.227 [2024-07-16 01:13:53.151159] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295
00:20:37.227 [2024-07-16 01:13:53.151168] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072
00:20:37.227 [2024-07-16 01:13:53.151175] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001
00:20:37.227 [2024-07-16 01:13:53.151182] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16
00:20:37.227 [2024-07-16 01:13:53.151190] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1
00:20:37.227 [2024-07-16 01:13:53.151198] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms)
00:20:37.227 [2024-07-16 01:13:53.151213] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms)
00:20:37.227 [2024-07-16 01:13:53.151230] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:37.227 [2024-07-16 01:13:53.151239] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:37.227 [2024-07-16 01:13:53.151245] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16d16e0)
00:20:37.227 [2024-07-16 01:13:53.151257] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0
00:20:37.227 [2024-07-16 01:13:53.151280] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1731540, cid 0, qid 0
00:20:37.227 [2024-07-16 01:13:53.151371] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:37.227 [2024-07-16 01:13:53.151383] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:37.227 [2024-07-16 01:13:53.151390] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:37.227 [2024-07-16 01:13:53.151396] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1731540) on tqpair=0x16d16e0
00:20:37.227 [2024-07-16 01:13:53.151407] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:37.227 [2024-07-16 01:13:53.151414] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:37.227 [2024-07-16 01:13:53.151420] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16d16e0)
00:20:37.227 [2024-07-16 01:13:53.151430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:20:37.227 [2024-07-16 01:13:53.151440] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:37.227 [2024-07-16 01:13:53.151447] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:37.227 [2024-07-16 01:13:53.151453] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x16d16e0)
00:20:37.227 [2024-07-16 01:13:53.151462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:20:37.227 [2024-07-16 01:13:53.151472] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:37.227 [2024-07-16 01:13:53.151479] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:37.227 [2024-07-16 01:13:53.151485] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x16d16e0)
00:20:37.227 [2024-07-16 01:13:53.151494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:20:37.227 [2024-07-16 01:13:53.151503] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:37.227 [2024-07-16 01:13:53.151510] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:37.227 [2024-07-16 01:13:53.151516] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16d16e0)
00:20:37.227 [2024-07-16 01:13:53.151525] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:20:37.227 [2024-07-16 01:13:53.151537] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms)
00:20:37.227 [2024-07-16 01:13:53.151556] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:20:37.227 [2024-07-16 01:13:53.151569] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:37.227 [2024-07-16 01:13:53.151576] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16d16e0)
00:20:37.227 [2024-07-16 01:13:53.151587] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.227 [2024-07-16 01:13:53.151624] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1731540, cid 0, qid 0
00:20:37.227 [2024-07-16 01:13:53.151635] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17316c0, cid 1, qid 0
00:20:37.227 [2024-07-16 01:13:53.151643] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1731840, cid 2, qid 0
00:20:37.227 [2024-07-16 01:13:53.151650] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17319c0, cid 3, qid 0
00:20:37.227 [2024-07-16 01:13:53.151658] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1731b40, cid 4, qid 0
00:20:37.227 [2024-07-16 01:13:53.151861] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:37.227 [2024-07-16 01:13:53.151877] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:37.227 [2024-07-16 01:13:53.151884] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:37.227 [2024-07-16 01:13:53.151891] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1731b40) on tqpair=0x16d16e0
00:20:37.227 [2024-07-16 01:13:53.151899] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us
00:20:37.227 [2024-07-16 01:13:53.151908] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms)
00:20:37.227 [2024-07-16 01:13:53.151927] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms)
00:20:37.227 [2024-07-16 01:13:53.151940] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms)
00:20:37.227 [2024-07-16 01:13:53.151951] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:37.227 [2024-07-16 01:13:53.151968] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:37.227 [2024-07-16 01:13:53.151975] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16d16e0)
00:20:37.227 [2024-07-16 01:13:53.151986] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:20:37.227 [2024-07-16 01:13:53.152008] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1731b40, cid 4, qid 0
00:20:37.227 [2024-07-16 01:13:53.152150] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:37.227 [2024-07-16 01:13:53.152165] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:37.227 [2024-07-16 01:13:53.152172] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:37.227 [2024-07-16 01:13:53.152179] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1731b40) on tqpair=0x16d16e0
00:20:37.227 [2024-07-16 01:13:53.152249] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms)
00:20:37.227 [2024-07-16 01:13:53.152270] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms)
00:20:37.227 [2024-07-16 01:13:53.152286] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:37.227 [2024-07-16 01:13:53.152293] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16d16e0)
00:20:37.227 [2024-07-16 01:13:53.152304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.227 [2024-07-16 01:13:53.152330] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1731b40, cid 4, qid 0
00:20:37.227 [2024-07-16 01:13:53.152479] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:20:37.227 [2024-07-16 01:13:53.152494] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:20:37.227 [2024-07-16 01:13:53.152501] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:20:37.227 [2024-07-16 01:13:53.152508] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16d16e0): datao=0, datal=4096, cccid=4
00:20:37.227 [2024-07-16 01:13:53.152515] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1731b40) on tqpair(0x16d16e0): expected_datao=0, payload_size=4096
00:20:37.227 [2024-07-16 01:13:53.152523] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:37.227 [2024-07-16 01:13:53.152533] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:20:37.227 [2024-07-16 01:13:53.152540] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:20:37.227 [2024-07-16 01:13:53.152573] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:37.227 [2024-07-16 01:13:53.152584] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:37.227 [2024-07-16 01:13:53.152591] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:37.227 [2024-07-16 01:13:53.152598] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1731b40) on tqpair=0x16d16e0
00:20:37.227 [2024-07-16 01:13:53.152613] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added
00:20:37.227 [2024-07-16 01:13:53.152638] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms)
00:20:37.227 [2024-07-16 01:13:53.152656] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms)
00:20:37.227 [2024-07-16 01:13:53.152670] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:37.227 [2024-07-16 01:13:53.152678] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16d16e0)
00:20:37.227 [2024-07-16 01:13:53.152689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.227 [2024-07-16 01:13:53.152710] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1731b40, cid 4, qid 0
00:20:37.227 [2024-07-16 01:13:53.152831] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:20:37.227 [2024-07-16 01:13:53.152846] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:20:37.227 [2024-07-16 01:13:53.152853] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:20:37.227 [2024-07-16 01:13:53.152859] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16d16e0): datao=0, datal=4096, cccid=4
00:20:37.227 [2024-07-16 01:13:53.152867] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1731b40) on tqpair(0x16d16e0): expected_datao=0, payload_size=4096
00:20:37.227 [2024-07-16 01:13:53.152874] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:37.227 [2024-07-16 01:13:53.152884] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
*DEBUG*: enter 00:20:37.227 [2024-07-16 01:13:53.152892] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:37.227 [2024-07-16 01:13:53.152903] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:37.227 [2024-07-16 01:13:53.152913] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:37.227 [2024-07-16 01:13:53.152920] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:37.228 [2024-07-16 01:13:53.152927] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1731b40) on tqpair=0x16d16e0 00:20:37.228 [2024-07-16 01:13:53.152949] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:37.228 [2024-07-16 01:13:53.156983] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:37.228 [2024-07-16 01:13:53.157019] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:37.228 [2024-07-16 01:13:53.157029] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16d16e0) 00:20:37.228 [2024-07-16 01:13:53.157040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.228 [2024-07-16 01:13:53.157063] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1731b40, cid 4, qid 0 00:20:37.228 [2024-07-16 01:13:53.157188] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:37.228 [2024-07-16 01:13:53.157203] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:37.228 [2024-07-16 01:13:53.157210] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:37.228 [2024-07-16 01:13:53.157216] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16d16e0): datao=0, datal=4096, cccid=4 00:20:37.228 [2024-07-16 01:13:53.157223] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1731b40) on tqpair(0x16d16e0): expected_datao=0, payload_size=4096 00:20:37.228 [2024-07-16 01:13:53.157231] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:37.228 [2024-07-16 01:13:53.157241] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:37.228 [2024-07-16 01:13:53.157249] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:37.228 [2024-07-16 01:13:53.157260] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:37.228 [2024-07-16 01:13:53.157270] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:37.228 [2024-07-16 01:13:53.157277] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:37.228 [2024-07-16 01:13:53.157284] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1731b40) on tqpair=0x16d16e0 00:20:37.228 [2024-07-16 01:13:53.157298] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:37.228 [2024-07-16 01:13:53.157313] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:20:37.228 [2024-07-16 01:13:53.157329] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:20:37.228 [2024-07-16 
01:13:53.157340] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:20:37.228 [2024-07-16 01:13:53.157349] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:37.228 [2024-07-16 01:13:53.157357] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:20:37.228 [2024-07-16 01:13:53.157366] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:20:37.228 [2024-07-16 01:13:53.157374] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:20:37.228 [2024-07-16 01:13:53.157383] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:20:37.228 [2024-07-16 01:13:53.157402] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:37.228 [2024-07-16 01:13:53.157411] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16d16e0) 00:20:37.228 [2024-07-16 01:13:53.157422] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.228 [2024-07-16 01:13:53.157433] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:37.228 [2024-07-16 01:13:53.157440] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:37.228 [2024-07-16 01:13:53.157447] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x16d16e0) 00:20:37.228 [2024-07-16 01:13:53.157456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:37.228 [2024-07-16 01:13:53.157485] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1731b40, cid 4, qid 0 00:20:37.228 [2024-07-16 01:13:53.157513] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1731cc0, cid 5, qid 0 00:20:37.228 [2024-07-16 01:13:53.157752] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:37.228 [2024-07-16 01:13:53.157765] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:37.228 [2024-07-16 01:13:53.157772] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:37.228 [2024-07-16 01:13:53.157778] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1731b40) on tqpair=0x16d16e0 00:20:37.228 [2024-07-16 01:13:53.157789] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:37.228 [2024-07-16 01:13:53.157798] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:37.228 [2024-07-16 01:13:53.157805] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:37.228 [2024-07-16 01:13:53.157811] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1731cc0) on tqpair=0x16d16e0 00:20:37.228 [2024-07-16 01:13:53.157827] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:37.228 [2024-07-16 01:13:53.157836] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x16d16e0) 00:20:37.228 [2024-07-16 01:13:53.157847] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 
cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.228 [2024-07-16 01:13:53.157868] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1731cc0, cid 5, qid 0 00:20:37.228 [2024-07-16 01:13:53.158009] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:37.228 [2024-07-16 01:13:53.158025] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:37.228 [2024-07-16 01:13:53.158032] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:37.228 [2024-07-16 01:13:53.158039] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1731cc0) on tqpair=0x16d16e0 00:20:37.228 [2024-07-16 01:13:53.158055] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:37.228 [2024-07-16 01:13:53.158064] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x16d16e0) 00:20:37.228 [2024-07-16 01:13:53.158075] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.228 [2024-07-16 01:13:53.158096] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1731cc0, cid 5, qid 0 00:20:37.228 [2024-07-16 01:13:53.158198] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:37.228 [2024-07-16 01:13:53.158213] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:37.228 [2024-07-16 01:13:53.158220] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:37.228 [2024-07-16 01:13:53.158226] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1731cc0) on tqpair=0x16d16e0 00:20:37.228 [2024-07-16 01:13:53.158241] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:37.228 [2024-07-16 01:13:53.158250] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x16d16e0) 00:20:37.228 [2024-07-16 01:13:53.158261] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.228 [2024-07-16 01:13:53.158282] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1731cc0, cid 5, qid 0 00:20:37.228 [2024-07-16 01:13:53.158372] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:37.228 [2024-07-16 01:13:53.158384] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:37.228 [2024-07-16 01:13:53.158391] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:37.228 [2024-07-16 01:13:53.158398] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1731cc0) on tqpair=0x16d16e0 00:20:37.228 [2024-07-16 01:13:53.158422] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:37.228 [2024-07-16 01:13:53.158433] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x16d16e0) 00:20:37.228 [2024-07-16 01:13:53.158447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.228 [2024-07-16 01:13:53.158460] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:37.228 [2024-07-16 01:13:53.158468] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16d16e0) 00:20:37.228 [2024-07-16 01:13:53.158477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 
nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.228 [2024-07-16 01:13:53.158490] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:37.228 [2024-07-16 01:13:53.158497] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x16d16e0) 00:20:37.228 [2024-07-16 01:13:53.158506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.228 [2024-07-16 01:13:53.158518] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:37.228 [2024-07-16 01:13:53.158526] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x16d16e0) 00:20:37.228 [2024-07-16 01:13:53.158535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.228 [2024-07-16 01:13:53.158557] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1731cc0, cid 5, qid 0 00:20:37.228 [2024-07-16 01:13:53.158568] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1731b40, cid 4, qid 0 00:20:37.228 [2024-07-16 01:13:53.158576] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1731e40, cid 6, qid 0 00:20:37.228 [2024-07-16 01:13:53.158584] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1731fc0, cid 7, qid 0 00:20:37.228 [2024-07-16 01:13:53.158785] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:37.228 [2024-07-16 01:13:53.158801] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:37.228 [2024-07-16 01:13:53.158808] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:37.228 [2024-07-16 01:13:53.158814] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16d16e0): datao=0, datal=8192, cccid=5 00:20:37.228 [2024-07-16 01:13:53.158822] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1731cc0) on tqpair(0x16d16e0): expected_datao=0, payload_size=8192 00:20:37.228 [2024-07-16 01:13:53.158829] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:37.228 [2024-07-16 01:13:53.158847] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:37.228 [2024-07-16 01:13:53.158856] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:37.228 [2024-07-16 01:13:53.158868] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:37.228 [2024-07-16 01:13:53.158878] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:37.228 [2024-07-16 01:13:53.158885] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:37.228 [2024-07-16 01:13:53.158891] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16d16e0): datao=0, datal=512, cccid=4 00:20:37.228 [2024-07-16 01:13:53.158899] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1731b40) on tqpair(0x16d16e0): expected_datao=0, payload_size=512 00:20:37.228 [2024-07-16 01:13:53.158906] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:37.228 [2024-07-16 01:13:53.158915] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:37.228 [2024-07-16 01:13:53.158922] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:37.228 [2024-07-16 01:13:53.158931] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 7 00:20:37.228 [2024-07-16 01:13:53.158939] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:37.228 [2024-07-16 01:13:53.158946] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:37.228 [2024-07-16 01:13:53.158952] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16d16e0): datao=0, datal=512, cccid=6 00:20:37.228 [2024-07-16 01:13:53.158974] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1731e40) on tqpair(0x16d16e0): expected_datao=0, payload_size=512 00:20:37.228 [2024-07-16 01:13:53.158982] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:37.228 [2024-07-16 01:13:53.158992] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:37.229 [2024-07-16 01:13:53.158999] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:37.229 [2024-07-16 01:13:53.159008] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:37.229 [2024-07-16 01:13:53.159016] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:37.229 [2024-07-16 01:13:53.159023] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:37.229 [2024-07-16 01:13:53.159029] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16d16e0): datao=0, datal=4096, cccid=7 00:20:37.229 [2024-07-16 01:13:53.159037] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1731fc0) on tqpair(0x16d16e0): expected_datao=0, payload_size=4096 00:20:37.229 [2024-07-16 01:13:53.159044] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:37.229 [2024-07-16 01:13:53.159053] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:37.229 [2024-07-16 01:13:53.159061] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:37.229 [2024-07-16 01:13:53.159072] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:37.229 [2024-07-16 01:13:53.159082] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:37.229 [2024-07-16 01:13:53.159089] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:37.229 [2024-07-16 01:13:53.159095] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1731cc0) on tqpair=0x16d16e0 00:20:37.229 [2024-07-16 01:13:53.159114] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:37.229 [2024-07-16 01:13:53.159125] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:37.229 [2024-07-16 01:13:53.159132] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:37.229 [2024-07-16 01:13:53.159138] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1731b40) on tqpair=0x16d16e0 00:20:37.229 [2024-07-16 01:13:53.159153] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:37.229 [2024-07-16 01:13:53.159164] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:37.229 [2024-07-16 01:13:53.159170] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:37.229 [2024-07-16 01:13:53.159177] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1731e40) on tqpair=0x16d16e0 00:20:37.229 [2024-07-16 01:13:53.159187] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:37.229 [2024-07-16 01:13:53.159197] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:37.229 [2024-07-16 01:13:53.159203] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
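The recurring pdu type = 7 / pdu type = 5 pairs above are the NVMe/TCP C2HData and CapsuleResp PDUs that carry each admin command's data and completion (handled by nvme_tcp_c2h_data_hdr_handle and nvme_tcp_capsule_resp_hdr_handle respectively); SPDK's identify example drives this exchange and then renders the returned payloads as the controller report that begins just below. A minimal way to repeat only this phase against the target from this run, assuming the standard SPDK example binary and its debug log-flag option (option names may vary by release):

    ./build/examples/identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -L nvme -L nvme_tcp    # -L enables the *DEBUG* traces seen here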
00:20:37.229 [2024-07-16 01:13:53.159210] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1731fc0) on tqpair=0x16d16e0
00:20:37.229 =====================================================
00:20:37.229 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:20:37.229 =====================================================
00:20:37.229 Controller Capabilities/Features
00:20:37.229 ================================
00:20:37.229 Vendor ID: 8086
00:20:37.229 Subsystem Vendor ID: 8086
00:20:37.229 Serial Number: SPDK00000000000001
00:20:37.229 Model Number: SPDK bdev Controller
00:20:37.229 Firmware Version: 24.09
00:20:37.229 Recommended Arb Burst: 6
00:20:37.229 IEEE OUI Identifier: e4 d2 5c
00:20:37.229 Multi-path I/O
00:20:37.229 May have multiple subsystem ports: Yes
00:20:37.229 May have multiple controllers: Yes
00:20:37.229 Associated with SR-IOV VF: No
00:20:37.229 Max Data Transfer Size: 131072
00:20:37.229 Max Number of Namespaces: 32
00:20:37.229 Max Number of I/O Queues: 127
00:20:37.229 NVMe Specification Version (VS): 1.3
00:20:37.229 NVMe Specification Version (Identify): 1.3
00:20:37.229 Maximum Queue Entries: 128
00:20:37.229 Contiguous Queues Required: Yes
00:20:37.229 Arbitration Mechanisms Supported
00:20:37.229 Weighted Round Robin: Not Supported
00:20:37.229 Vendor Specific: Not Supported
00:20:37.229 Reset Timeout: 15000 ms
00:20:37.229 Doorbell Stride: 4 bytes
00:20:37.229 NVM Subsystem Reset: Not Supported
00:20:37.229 Command Sets Supported
00:20:37.229 NVM Command Set: Supported
00:20:37.229 Boot Partition: Not Supported
00:20:37.229 Memory Page Size Minimum: 4096 bytes
00:20:37.229 Memory Page Size Maximum: 4096 bytes
00:20:37.229 Persistent Memory Region: Not Supported
00:20:37.229 Optional Asynchronous Events Supported
00:20:37.229 Namespace Attribute Notices: Supported
00:20:37.229 Firmware Activation Notices: Not Supported
00:20:37.229 ANA Change Notices: Not Supported
00:20:37.229 PLE Aggregate Log Change Notices: Not Supported
00:20:37.229 LBA Status Info Alert Notices: Not Supported
00:20:37.229 EGE Aggregate Log Change Notices: Not Supported
00:20:37.229 Normal NVM Subsystem Shutdown event: Not Supported
00:20:37.229 Zone Descriptor Change Notices: Not Supported
00:20:37.229 Discovery Log Change Notices: Not Supported
00:20:37.229 Controller Attributes
00:20:37.229 128-bit Host Identifier: Supported
00:20:37.229 Non-Operational Permissive Mode: Not Supported
00:20:37.229 NVM Sets: Not Supported
00:20:37.229 Read Recovery Levels: Not Supported
00:20:37.229 Endurance Groups: Not Supported
00:20:37.229 Predictable Latency Mode: Not Supported
00:20:37.229 Traffic Based Keep Alive: Not Supported
00:20:37.229 Namespace Granularity: Not Supported
00:20:37.229 SQ Associations: Not Supported
00:20:37.229 UUID List: Not Supported
00:20:37.229 Multi-Domain Subsystem: Not Supported
00:20:37.229 Fixed Capacity Management: Not Supported
00:20:37.229 Variable Capacity Management: Not Supported
00:20:37.229 Delete Endurance Group: Not Supported
00:20:37.229 Delete NVM Set: Not Supported
00:20:37.229 Extended LBA Formats Supported: Not Supported
00:20:37.229 Flexible Data Placement Supported: Not Supported
00:20:37.229
00:20:37.229 Controller Memory Buffer Support
00:20:37.229 ================================
00:20:37.229 Supported: No
00:20:37.229
00:20:37.229 Persistent Memory Region Support
00:20:37.229 ================================
00:20:37.229 Supported: No
00:20:37.229
00:20:37.229 Admin Command Set Attributes
00:20:37.229 ============================
00:20:37.229 Security Send/Receive: Not Supported
00:20:37.229 Format NVM: Not Supported
00:20:37.229 Firmware Activate/Download: Not Supported
00:20:37.229 Namespace Management: Not Supported
00:20:37.229 Device Self-Test: Not Supported
00:20:37.229 Directives: Not Supported
00:20:37.229 NVMe-MI: Not Supported
00:20:37.229 Virtualization Management: Not Supported
00:20:37.229 Doorbell Buffer Config: Not Supported
00:20:37.229 Get LBA Status Capability: Not Supported
00:20:37.229 Command & Feature Lockdown Capability: Not Supported
00:20:37.229 Abort Command Limit: 4
00:20:37.229 Async Event Request Limit: 4
00:20:37.229 Number of Firmware Slots: N/A
00:20:37.229 Firmware Slot 1 Read-Only: N/A
00:20:37.229 Firmware Activation Without Reset: N/A
00:20:37.229 Multiple Update Detection Support: N/A
00:20:37.229 Firmware Update Granularity: No Information Provided
00:20:37.229 Per-Namespace SMART Log: No
00:20:37.229 Asymmetric Namespace Access Log Page: Not Supported
00:20:37.229 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:20:37.229 Command Effects Log Page: Supported
00:20:37.229 Get Log Page Extended Data: Supported
00:20:37.229 Telemetry Log Pages: Not Supported
00:20:37.229 Persistent Event Log Pages: Not Supported
00:20:37.229 Supported Log Pages Log Page: May Support
00:20:37.229 Commands Supported & Effects Log Page: Not Supported
00:20:37.229 Feature Identifiers & Effects Log Page: May Support
00:20:37.229 NVMe-MI Commands & Effects Log Page: May Support
00:20:37.229 Data Area 4 for Telemetry Log: Not Supported
00:20:37.229 Error Log Page Entries Supported: 128
00:20:37.229 Keep Alive: Supported
00:20:37.229 Keep Alive Granularity: 10000 ms
00:20:37.229
00:20:37.229 NVM Command Set Attributes
00:20:37.229 ==========================
00:20:37.229 Submission Queue Entry Size
00:20:37.229 Max: 64
00:20:37.229 Min: 64
00:20:37.229 Completion Queue Entry Size
00:20:37.229 Max: 16
00:20:37.229 Min: 16
00:20:37.229 Number of Namespaces: 32
00:20:37.229 Compare Command: Supported
00:20:37.229 Write Uncorrectable Command: Not Supported
00:20:37.229 Dataset Management Command: Supported
00:20:37.229 Write Zeroes Command: Supported
00:20:37.229 Set Features Save Field: Not Supported
00:20:37.229 Reservations: Supported
00:20:37.229 Timestamp: Not Supported
00:20:37.229 Copy: Supported
00:20:37.229 Volatile Write Cache: Present
00:20:37.229 Atomic Write Unit (Normal): 1
00:20:37.229 Atomic Write Unit (PFail): 1
00:20:37.229 Atomic Compare & Write Unit: 1
00:20:37.229 Fused Compare & Write: Supported
00:20:37.229 Scatter-Gather List
00:20:37.229 SGL Command Set: Supported
00:20:37.229 SGL Keyed: Supported
00:20:37.229 SGL Bit Bucket Descriptor: Not Supported
00:20:37.229 SGL Metadata Pointer: Not Supported
00:20:37.229 Oversized SGL: Not Supported
00:20:37.229 SGL Metadata Address: Not Supported
00:20:37.229 SGL Offset: Supported
00:20:37.229 Transport SGL Data Block: Not Supported
00:20:37.229 Replay Protected Memory Block: Not Supported
00:20:37.229
00:20:37.229 Firmware Slot Information
00:20:37.229 =========================
00:20:37.229 Active slot: 1
00:20:37.229 Slot 1 Firmware Revision: 24.09
00:20:37.229
00:20:37.229
00:20:37.229 Commands Supported and Effects
00:20:37.229 ==============================
00:20:37.229 Admin Commands
00:20:37.229 --------------
00:20:37.229 Get Log Page (02h): Supported
00:20:37.229 Identify (06h): Supported
00:20:37.229 Abort (08h): Supported
00:20:37.229 Set Features (09h): Supported
00:20:37.229 Get Features (0Ah): Supported
00:20:37.229 Asynchronous Event Request (0Ch): Supported
00:20:37.229 Keep Alive (18h): Supported
00:20:37.229 I/O Commands
00:20:37.229 ------------
00:20:37.229 Flush (00h): Supported LBA-Change
00:20:37.229 Write (01h): Supported LBA-Change
00:20:37.229 Read (02h): Supported
00:20:37.229 Compare (05h): Supported
00:20:37.229 Write Zeroes (08h): Supported LBA-Change
00:20:37.229 Dataset Management (09h): Supported LBA-Change
00:20:37.229 Copy (19h): Supported LBA-Change
00:20:37.229
00:20:37.229 Error Log
00:20:37.230 =========
00:20:37.230
00:20:37.230 Arbitration
00:20:37.230 ===========
00:20:37.230 Arbitration Burst: 1
00:20:37.230
00:20:37.230 Power Management
00:20:37.230 ================
00:20:37.230 Number of Power States: 1
00:20:37.230 Current Power State: Power State #0
00:20:37.230 Power State #0:
00:20:37.230 Max Power: 0.00 W
00:20:37.230 Non-Operational State: Operational
00:20:37.230 Entry Latency: Not Reported
00:20:37.230 Exit Latency: Not Reported
00:20:37.230 Relative Read Throughput: 0
00:20:37.230 Relative Read Latency: 0
00:20:37.230 Relative Write Throughput: 0
00:20:37.230 Relative Write Latency: 0
00:20:37.230 Idle Power: Not Reported
00:20:37.230 Active Power: Not Reported
00:20:37.230 Non-Operational Permissive Mode: Not Supported
00:20:37.230
00:20:37.230 Health Information
00:20:37.230 ==================
00:20:37.230 Critical Warnings:
00:20:37.230 Available Spare Space: OK
00:20:37.230 Temperature: OK
00:20:37.230 Device Reliability: OK
00:20:37.230 Read Only: No
00:20:37.230 Volatile Memory Backup: OK
00:20:37.230 Current Temperature: 0 Kelvin (-273 Celsius)
00:20:37.230 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:20:37.230 Available Spare: 0%
00:20:37.230 Available Spare Threshold: 0%
00:20:37.230 Life Percentage Used:[2024-07-16 01:13:53.159341] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:37.230 [2024-07-16 01:13:53.159353] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x16d16e0)
00:20:37.230 [2024-07-16 01:13:53.159364] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.230 [2024-07-16 01:13:53.159387] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1731fc0, cid 7, qid 0
00:20:37.230 [2024-07-16 01:13:53.159549] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:37.230 [2024-07-16 01:13:53.159561] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:37.230 [2024-07-16 01:13:53.159568] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:37.230 [2024-07-16 01:13:53.159575] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1731fc0) on tqpair=0x16d16e0
00:20:37.230 [2024-07-16 01:13:53.159620] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD
00:20:37.230 [2024-07-16 01:13:53.159639] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1731540) on tqpair=0x16d16e0
00:20:37.230 [2024-07-16 01:13:53.159654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:37.230 [2024-07-16 01:13:53.159663] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17316c0) on tqpair=0x16d16e0
00:20:37.230 [2024-07-16 01:13:53.159671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0
sqhd:0000 p:0 m:0 dnr:0 00:20:37.230 [2024-07-16 01:13:53.159679] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1731840) on tqpair=0x16d16e0 00:20:37.230 [2024-07-16 01:13:53.159687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.230 [2024-07-16 01:13:53.159695] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17319c0) on tqpair=0x16d16e0 00:20:37.230 [2024-07-16 01:13:53.159703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.230 [2024-07-16 01:13:53.159715] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:37.230 [2024-07-16 01:13:53.159723] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:37.230 [2024-07-16 01:13:53.159730] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16d16e0) 00:20:37.230 [2024-07-16 01:13:53.159740] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.230 [2024-07-16 01:13:53.159763] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17319c0, cid 3, qid 0 00:20:37.230 [2024-07-16 01:13:53.159904] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:37.230 [2024-07-16 01:13:53.159919] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:37.230 [2024-07-16 01:13:53.159926] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:37.230 [2024-07-16 01:13:53.159933] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17319c0) on tqpair=0x16d16e0 00:20:37.230 [2024-07-16 01:13:53.159944] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:37.230 [2024-07-16 01:13:53.159952] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:37.230 [2024-07-16 01:13:53.159969] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16d16e0) 00:20:37.230 [2024-07-16 01:13:53.159980] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.230 [2024-07-16 01:13:53.160007] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17319c0, cid 3, qid 0 00:20:37.230 [2024-07-16 01:13:53.160117] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:37.230 [2024-07-16 01:13:53.160129] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:37.230 [2024-07-16 01:13:53.160136] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:37.230 [2024-07-16 01:13:53.160143] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17319c0) on tqpair=0x16d16e0 00:20:37.230 [2024-07-16 01:13:53.160151] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:20:37.230 [2024-07-16 01:13:53.160158] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:20:37.230 [2024-07-16 01:13:53.160174] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:37.230 [2024-07-16 01:13:53.160182] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:37.230 [2024-07-16 01:13:53.160189] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16d16e0) 00:20:37.230 [2024-07-16 
01:13:53.160199] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.230 [2024-07-16 01:13:53.160220] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17319c0, cid 3, qid 0 00:20:37.230 [2024-07-16 01:13:53.160306] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:37.230 [2024-07-16 01:13:53.160318] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:37.230 [2024-07-16 01:13:53.160324] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:37.230 [2024-07-16 01:13:53.160335] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17319c0) on tqpair=0x16d16e0 00:20:37.230 [2024-07-16 01:13:53.160352] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:37.230 [2024-07-16 01:13:53.160361] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:37.230 [2024-07-16 01:13:53.160368] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16d16e0) 00:20:37.230 [2024-07-16 01:13:53.160379] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.230 [2024-07-16 01:13:53.160399] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17319c0, cid 3, qid 0 00:20:37.230 [2024-07-16 01:13:53.160492] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:37.230 [2024-07-16 01:13:53.160507] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:37.230 [2024-07-16 01:13:53.160514] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:37.230 [2024-07-16 01:13:53.160521] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17319c0) on tqpair=0x16d16e0 00:20:37.230 [2024-07-16 01:13:53.160537] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:37.230 [2024-07-16 01:13:53.160546] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:37.230 [2024-07-16 01:13:53.160553] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16d16e0) 00:20:37.230 [2024-07-16 01:13:53.160563] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.230 [2024-07-16 01:13:53.160584] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17319c0, cid 3, qid 0 00:20:37.230 [2024-07-16 01:13:53.160693] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:37.230 [2024-07-16 01:13:53.160708] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:37.230 [2024-07-16 01:13:53.160715] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:37.230 [2024-07-16 01:13:53.160722] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17319c0) on tqpair=0x16d16e0 00:20:37.230 [2024-07-16 01:13:53.160738] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:37.230 [2024-07-16 01:13:53.160747] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:37.230 [2024-07-16 01:13:53.160754] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16d16e0) 00:20:37.230 [2024-07-16 01:13:53.160765] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.230 [2024-07-16 01:13:53.160786] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17319c0, cid 3, qid 0 00:20:37.230 [2024-07-16 01:13:53.160894] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:37.230 [2024-07-16 01:13:53.160909] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:37.230 [2024-07-16 01:13:53.160916] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:37.230 [2024-07-16 01:13:53.160923] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17319c0) on tqpair=0x16d16e0 00:20:37.230 [2024-07-16 01:13:53.160939] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:37.230 [2024-07-16 01:13:53.160948] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:37.230 [2024-07-16 01:13:53.164965] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16d16e0) 00:20:37.230 [2024-07-16 01:13:53.164983] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.230 [2024-07-16 01:13:53.165007] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17319c0, cid 3, qid 0 00:20:37.230 [2024-07-16 01:13:53.165115] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:37.230 [2024-07-16 01:13:53.165130] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:37.230 [2024-07-16 01:13:53.165137] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:37.230 [2024-07-16 01:13:53.165144] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17319c0) on tqpair=0x16d16e0 00:20:37.230 [2024-07-16 01:13:53.165161] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:20:37.230 0% 00:20:37.230 Data Units Read: 0 00:20:37.230 Data Units Written: 0 00:20:37.230 Host Read Commands: 0 00:20:37.230 Host Write Commands: 0 00:20:37.230 Controller Busy Time: 0 minutes 00:20:37.230 Power Cycles: 0 00:20:37.230 Power On Hours: 0 hours 00:20:37.230 Unsafe Shutdowns: 0 00:20:37.230 Unrecoverable Media Errors: 0 00:20:37.230 Lifetime Error Log Entries: 0 00:20:37.230 Warning Temperature Time: 0 minutes 00:20:37.230 Critical Temperature Time: 0 minutes 00:20:37.230 00:20:37.230 Number of Queues 00:20:37.230 ================ 00:20:37.230 Number of I/O Submission Queues: 127 00:20:37.230 Number of I/O Completion Queues: 127 00:20:37.230 00:20:37.230 Active Namespaces 00:20:37.230 ================= 00:20:37.230 Namespace ID:1 00:20:37.230 Error Recovery Timeout: Unlimited 00:20:37.230 Command Set Identifier: NVM (00h) 00:20:37.230 Deallocate: Supported 00:20:37.230 Deallocated/Unwritten Error: Not Supported 00:20:37.230 Deallocated Read Value: Unknown 00:20:37.230 Deallocate in Write Zeroes: Not Supported 00:20:37.231 Deallocated Guard Field: 0xFFFF 00:20:37.231 Flush: Supported 00:20:37.231 Reservation: Supported 00:20:37.231 Namespace Sharing Capabilities: Multiple Controllers 00:20:37.231 Size (in LBAs): 131072 (0GiB) 00:20:37.231 Capacity (in LBAs): 131072 (0GiB) 00:20:37.231 Utilization (in LBAs): 131072 (0GiB) 00:20:37.231 NGUID: ABCDEF0123456789ABCDEF0123456789 00:20:37.231 EUI64: ABCDEF0123456789 00:20:37.231 UUID: 762037be-5809-47c5-a959-787f306ed269 00:20:37.231 Thin Provisioning: Not Supported 00:20:37.231 Per-NS Atomic Units: Yes 00:20:37.231 Atomic Boundary Size (Normal): 0 00:20:37.231 Atomic Boundary Size (PFail): 0 00:20:37.231 Atomic Boundary Offset: 0 00:20:37.231 
Maximum Single Source Range Length: 65535 00:20:37.231 Maximum Copy Length: 65535 00:20:37.231 Maximum Source Range Count: 1 00:20:37.231 NGUID/EUI64 Never Reused: No 00:20:37.231 Namespace Write Protected: No 00:20:37.231 Number of LBA Formats: 1 00:20:37.231 Current LBA Format: LBA Format #00 00:20:37.231 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:37.231 00:20:37.231 01:13:53 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:20:37.231 01:13:53 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:37.231 01:13:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.231 01:13:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:37.231 01:13:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.231 01:13:53 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:20:37.231 01:13:53 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:20:37.231 01:13:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:37.231 01:13:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:20:37.231 01:13:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:37.231 01:13:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:20:37.231 01:13:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:37.231 01:13:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:37.231 rmmod nvme_tcp 00:20:37.488 rmmod nvme_fabrics 00:20:37.488 rmmod nvme_keyring 00:20:37.488 01:13:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:37.488 01:13:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:20:37.488 01:13:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:20:37.488 01:13:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 9568 ']' 00:20:37.488 01:13:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 9568 00:20:37.488 01:13:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 9568 ']' 00:20:37.488 01:13:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 9568 00:20:37.488 01:13:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:20:37.488 01:13:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:37.488 01:13:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 9568 00:20:37.488 01:13:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:37.488 01:13:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:37.488 01:13:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 9568' 00:20:37.488 killing process with pid 9568 00:20:37.488 01:13:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 9568 00:20:37.488 01:13:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 9568 00:20:37.745 01:13:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:37.745 01:13:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:37.745 01:13:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:37.745 01:13:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
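The identify pass is now winding down: the harness deletes the subsystem over RPC, unloads the host kernel NVMe modules, then kills the target, waits for it, and (continued below) flushes the test network namespaces. Condensed from the trace above into plain commands (rpc_cmd is the harness wrapper around scripts/rpc.py; the pid 9568 is specific to this run):

    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-tcp       # rmmod's nvme_tcp / nvme_fabrics / nvme_keyring
    modprobe -v -r nvme-fabrics
    kill 9568 && wait 9568        # nvmf_tgt (process name reactor_0)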
00:20:37.745 01:13:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:37.745 01:13:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:37.745 01:13:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:37.745 01:13:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:39.646 01:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:39.646 00:20:39.646 real 0m5.394s 00:20:39.646 user 0m4.460s 00:20:39.646 sys 0m1.791s 00:20:39.646 01:13:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:39.646 01:13:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:39.646 ************************************ 00:20:39.646 END TEST nvmf_identify 00:20:39.646 ************************************ 00:20:39.646 01:13:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:39.646 01:13:55 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:39.646 01:13:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:39.646 01:13:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:39.646 01:13:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:39.646 ************************************ 00:20:39.646 START TEST nvmf_perf 00:20:39.646 ************************************ 00:20:39.646 01:13:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:39.925 * Looking for test storage... 00:20:39.925 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:39.925 01:13:55 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:39.925 01:13:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:20:39.925 01:13:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:39.925 01:13:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:39.925 01:13:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:39.925 01:13:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:39.925 01:13:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:39.925 01:13:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:39.925 01:13:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:39.925 01:13:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:39.925 01:13:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:39.925 01:13:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:39.925 01:13:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:39.925 01:13:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:39.925 01:13:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:39.925 01:13:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:39.925 01:13:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:20:39.925 01:13:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:39.925 01:13:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:39.925 01:13:55 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:39.925 01:13:55 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:39.925 01:13:55 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:39.925 01:13:55 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.925 01:13:55 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.925 01:13:55 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.925 01:13:55 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:20:39.925 01:13:55 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.925 01:13:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:20:39.925 01:13:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:39.925 01:13:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:39.925 01:13:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:39.925 01:13:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:39.925 01:13:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:20:39.925 01:13:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:39.925 01:13:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:39.925 01:13:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:39.925 01:13:55 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:39.925 01:13:55 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:39.925 01:13:55 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:39.925 01:13:55 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:20:39.926 01:13:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:39.926 01:13:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:39.926 01:13:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:39.926 01:13:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:39.926 01:13:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:39.926 01:13:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:39.926 01:13:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:39.926 01:13:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:39.926 01:13:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:39.926 01:13:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:39.926 01:13:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:20:39.926 01:13:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:42.459 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:42.459 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:20:42.459 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:42.460 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:42.460 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:42.460 Found net devices under 0000:09:00.0: cvl_0_0 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:42.460 Found net devices under 0000:09:00.1: cvl_0_1 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:42.460 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:42.460 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:20:42.460 00:20:42.460 --- 10.0.0.2 ping statistics --- 00:20:42.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:42.460 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:42.460 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:42.460 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:20:42.460 00:20:42.460 --- 10.0.0.1 ping statistics --- 00:20:42.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:42.460 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:42.460 01:13:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:42.460 01:13:58 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:20:42.460 01:13:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:42.460 01:13:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:42.460 01:13:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:42.460 01:13:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=11643 00:20:42.460 01:13:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 11643 00:20:42.460 01:13:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:42.460 01:13:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 11643 ']' 00:20:42.460 01:13:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:42.460 01:13:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:42.460 01:13:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:42.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:42.460 01:13:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:42.460 01:13:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:42.460 [2024-07-16 01:13:58.054329] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 
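The nvmf_tcp_init trace above reduces to a short, reusable recipe: one port of the two-port NIC is moved into a network namespace to play the target, the other stays in the default namespace as the initiator, and a bidirectional ping proves the link. A sketch using the cvl_0_0/cvl_0_1 names and 10.0.0.0/24 addressing from this run (substitute your own interface names):

ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1    # start from clean addressing
ip netns add cvl_0_0_ns_spdk                            # namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # move the first port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator address, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                      # initiator -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target -> initiator sanity check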
00:20:42.460 [2024-07-16 01:13:58.054408] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:42.460 EAL: No free 2048 kB hugepages reported on node 1 00:20:42.460 [2024-07-16 01:13:58.117036] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:42.460 [2024-07-16 01:13:58.222789] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:42.460 [2024-07-16 01:13:58.222835] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:42.460 [2024-07-16 01:13:58.222863] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:42.460 [2024-07-16 01:13:58.222874] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:42.460 [2024-07-16 01:13:58.222890] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:42.460 [2024-07-16 01:13:58.223037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:42.460 [2024-07-16 01:13:58.223064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:42.460 [2024-07-16 01:13:58.223109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:42.461 [2024-07-16 01:13:58.223113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:42.461 01:13:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:42.461 01:13:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:20:42.461 01:13:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:42.461 01:13:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:42.461 01:13:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:42.461 01:13:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:42.461 01:13:58 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:20:42.461 01:13:58 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:20:45.739 01:14:01 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:20:45.739 01:14:01 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:20:45.739 01:14:01 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:0b:00.0 00:20:45.739 01:14:01 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:45.997 01:14:01 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:20:45.997 01:14:01 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:0b:00.0 ']' 00:20:45.997 01:14:01 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:20:45.997 01:14:01 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:20:45.997 01:14:01 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:46.255 [2024-07-16 01:14:02.196309] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP 
Transport Init *** 00:20:46.255 01:14:02 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:46.528 01:14:02 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:46.528 01:14:02 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:46.785 01:14:02 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:46.785 01:14:02 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:20:47.043 01:14:02 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:47.300 [2024-07-16 01:14:03.187891] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:47.300 01:14:03 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:47.557 01:14:03 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:0b:00.0 ']' 00:20:47.557 01:14:03 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:0b:00.0' 00:20:47.557 01:14:03 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:20:47.557 01:14:03 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:0b:00.0' 00:20:48.927 Initializing NVMe Controllers 00:20:48.927 Attached to NVMe Controller at 0000:0b:00.0 [8086:0a54] 00:20:48.927 Associating PCIE (0000:0b:00.0) NSID 1 with lcore 0 00:20:48.927 Initialization complete. Launching workers. 00:20:48.927 ======================================================== 00:20:48.927 Latency(us) 00:20:48.927 Device Information : IOPS MiB/s Average min max 00:20:48.927 PCIE (0000:0b:00.0) NSID 1 from core 0: 85131.16 332.54 375.29 37.21 5305.65 00:20:48.927 ======================================================== 00:20:48.927 Total : 85131.16 332.54 375.29 37.21 5305.65 00:20:48.927 00:20:48.928 01:14:04 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:48.928 EAL: No free 2048 kB hugepages reported on node 1 00:20:50.300 Initializing NVMe Controllers 00:20:50.300 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:50.300 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:50.300 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:50.300 Initialization complete. Launching workers. 
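By this point host/perf.sh has assembled the target over JSON-RPC. Condensed from the trace above (the $rpc shorthand is mine; the log spells out the full scripts/rpc.py path, and the NSID comments are inferred from the order the namespaces are added):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Nvme0n1 was attached earlier via scripts/gen_nvme.sh | rpc.py load_subsystem_config
$rpc bdev_malloc_create 64 512                          # 64 MiB RAM bdev -> Malloc0
$rpc nvmf_create_transport -t tcp -o                    # TCP transport, options as computed above
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # NSID 1 in the runs below
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1    # NSID 2 in the runs below
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420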
00:20:50.300 ======================================================== 00:20:50.300 Latency(us) 00:20:50.300 Device Information : IOPS MiB/s Average min max 00:20:50.300 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 147.00 0.57 6894.29 151.28 45788.27 00:20:50.300 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 61.00 0.24 16483.03 7929.07 47893.88 00:20:50.300 ======================================================== 00:20:50.300 Total : 208.00 0.81 9706.37 151.28 47893.88 00:20:50.300 00:20:50.300 01:14:05 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:50.300 EAL: No free 2048 kB hugepages reported on node 1 00:20:51.232 Initializing NVMe Controllers 00:20:51.232 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:51.232 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:51.232 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:51.232 Initialization complete. Launching workers. 00:20:51.232 ======================================================== 00:20:51.232 Latency(us) 00:20:51.232 Device Information : IOPS MiB/s Average min max 00:20:51.232 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8182.32 31.96 3912.40 576.93 8918.90 00:20:51.232 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3818.68 14.92 8424.00 6769.12 16718.34 00:20:51.232 ======================================================== 00:20:51.232 Total : 12001.00 46.88 5347.98 576.93 16718.34 00:20:51.232 00:20:51.232 01:14:07 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:20:51.233 01:14:07 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:20:51.233 01:14:07 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:51.233 EAL: No free 2048 kB hugepages reported on node 1 00:20:53.818 Initializing NVMe Controllers 00:20:53.818 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:53.818 Controller IO queue size 128, less than required. 00:20:53.818 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:53.818 Controller IO queue size 128, less than required. 00:20:53.818 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:53.818 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:53.818 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:53.818 Initialization complete. Launching workers. 
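The fabrics sweep around this point varies queue depth, IO size, and digest settings against one connection string. The pattern, condensed ($PERF and $ADDR shorthands are mine; the flags are copied from the runs, and reading -H/-I as TCP header/data digest enables is my gloss, not stated in the log):

PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
ADDR='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
$PERF -q 1   -o 4096   -w randrw -M 50 -t 1 -r "$ADDR"            # QD1, 4 KiB: latency-oriented baseline
$PERF -q 32  -o 4096   -w randrw -M 50 -t 1 -HI -r "$ADDR"        # same IO size with digests enabled
$PERF -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r "$ADDR"   # 256 KiB IOs at QD128, 2 s runtime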
00:20:53.818 ======================================================== 00:20:53.818 Latency(us) 00:20:53.818 Device Information : IOPS MiB/s Average min max 00:20:53.818 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1392.64 348.16 93880.13 61775.95 126962.08 00:20:53.818 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 568.85 142.21 242068.85 104903.55 378898.79 00:20:53.818 ======================================================== 00:20:53.818 Total : 1961.49 490.37 136856.37 61775.95 378898.79 00:20:53.818 00:20:53.818 01:14:09 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:20:53.818 EAL: No free 2048 kB hugepages reported on node 1 00:20:53.818 No valid NVMe controllers or AIO or URING devices found 00:20:53.818 Initializing NVMe Controllers 00:20:53.818 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:53.818 Controller IO queue size 128, less than required. 00:20:53.818 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:53.818 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:20:53.818 Controller IO queue size 128, less than required. 00:20:53.818 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:53.818 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:20:53.818 WARNING: Some requested NVMe devices were skipped 00:20:54.076 01:14:09 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:20:54.076 EAL: No free 2048 kB hugepages reported on node 1 00:20:56.604 Initializing NVMe Controllers 00:20:56.604 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:56.604 Controller IO queue size 128, less than required. 00:20:56.604 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:56.604 Controller IO queue size 128, less than required. 00:20:56.604 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:56.604 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:56.604 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:56.604 Initialization complete. Launching workers. 
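The empty run a few lines above (-o 36964) is expected rather than a failure: both namespaces report 512-byte sectors, 36964 is not a multiple of 512, so perf removes each namespace from the test and is left with no devices. A quick check:

echo $(( 36964 % 512 ))   # prints 100 (72 * 512 = 36864), matching the alignment warnings above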
00:20:56.604 00:20:56.604 ==================== 00:20:56.604 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:20:56.604 TCP transport: 00:20:56.604 polls: 8140 00:20:56.604 idle_polls: 5774 00:20:56.604 sock_completions: 2366 00:20:56.604 nvme_completions: 4707 00:20:56.604 submitted_requests: 7090 00:20:56.604 queued_requests: 1 00:20:56.604 00:20:56.604 ==================== 00:20:56.604 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:20:56.604 TCP transport: 00:20:56.604 polls: 10819 00:20:56.604 idle_polls: 8315 00:20:56.604 sock_completions: 2504 00:20:56.604 nvme_completions: 4951 00:20:56.604 submitted_requests: 7400 00:20:56.604 queued_requests: 1 00:20:56.604 ======================================================== 00:20:56.604 Latency(us) 00:20:56.604 Device Information : IOPS MiB/s Average min max 00:20:56.604 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1173.97 293.49 110419.89 74256.68 183072.37 00:20:56.604 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1234.84 308.71 105278.85 45713.71 137661.90 00:20:56.604 ======================================================== 00:20:56.604 Total : 2408.82 602.20 107784.41 45713.71 183072.37 00:20:56.604 00:20:56.604 01:14:12 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:20:56.604 01:14:12 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:56.861 01:14:12 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:20:56.861 01:14:12 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:20:56.861 01:14:12 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:20:56.861 01:14:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:56.861 01:14:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:20:56.861 01:14:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:56.861 01:14:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:20:56.861 01:14:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:56.861 01:14:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:56.861 rmmod nvme_tcp 00:20:56.861 rmmod nvme_fabrics 00:20:56.861 rmmod nvme_keyring 00:20:56.861 01:14:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:56.861 01:14:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:20:56.861 01:14:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:20:56.861 01:14:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 11643 ']' 00:20:56.861 01:14:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 11643 00:20:56.861 01:14:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 11643 ']' 00:20:56.861 01:14:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 11643 00:20:56.861 01:14:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:20:56.861 01:14:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:56.861 01:14:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 11643 00:20:56.861 01:14:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:56.861 01:14:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:56.861 01:14:12 nvmf_tcp.nvmf_perf -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 11643' 00:20:56.861 killing process with pid 11643 00:20:56.861 01:14:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 11643 00:20:56.861 01:14:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 11643 00:20:58.756 01:14:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:58.756 01:14:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:58.756 01:14:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:58.756 01:14:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:58.756 01:14:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:58.756 01:14:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:58.756 01:14:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:58.756 01:14:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:00.657 01:14:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:00.657 00:21:00.657 real 0m20.797s 00:21:00.657 user 1m2.562s 00:21:00.657 sys 0m5.315s 00:21:00.657 01:14:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:00.657 01:14:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:00.657 ************************************ 00:21:00.657 END TEST nvmf_perf 00:21:00.657 ************************************ 00:21:00.657 01:14:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:00.657 01:14:16 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:00.657 01:14:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:00.657 01:14:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:00.657 01:14:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:00.657 ************************************ 00:21:00.657 START TEST nvmf_fio_host 00:21:00.657 ************************************ 00:21:00.657 01:14:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:00.657 * Looking for test storage... 
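The fio_host test starting here drives the same kind of TCP target through fio's external SPDK ioengine instead of spdk_nvme_perf. Its invocation pattern, condensed from the fio_nvme calls later in this log (the $SPDK shorthand is mine):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
LD_PRELOAD=$SPDK/build/fio/spdk_nvme \
  /usr/src/fio/fio $SPDK/app/fio/nvme/example_config.fio \
  '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
# the job file selects ioengine=spdk, so IO goes through the preloaded plugin, bypassing the kernel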
00:21:00.657 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:00.657 01:14:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:00.657 01:14:16 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:00.657 01:14:16 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:00.657 01:14:16 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:00.657 01:14:16 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.657 01:14:16 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.657 01:14:16 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.657 01:14:16 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:00.657 01:14:16 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.657 01:14:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:00.657 01:14:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:21:00.657 01:14:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:00.657 01:14:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:00.657 01:14:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:21:00.657 01:14:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:00.657 01:14:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:00.657 01:14:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:00.657 01:14:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:00.657 01:14:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:00.657 01:14:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:00.657 01:14:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:00.657 01:14:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:00.657 01:14:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:00.657 01:14:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:00.657 01:14:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:00.657 01:14:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:00.657 01:14:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:00.657 01:14:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:00.657 01:14:16 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:00.657 01:14:16 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:00.657 01:14:16 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:00.657 01:14:16 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.657 01:14:16 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.657 01:14:16 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.657 01:14:16 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:00.657 01:14:16 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.657 01:14:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:21:00.657 01:14:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:00.657 01:14:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:00.657 01:14:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:00.657 01:14:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:00.657 01:14:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:00.657 01:14:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:00.657 01:14:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:00.657 01:14:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:00.658 01:14:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:00.658 01:14:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:21:00.658 01:14:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:00.658 01:14:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:00.658 01:14:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:00.658 01:14:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:00.658 01:14:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:00.658 01:14:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:00.658 01:14:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:00.658 01:14:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:00.658 01:14:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:00.658 01:14:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:00.658 01:14:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:21:00.658 01:14:16 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:03.184 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:03.184 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:03.184 Found net devices under 0000:09:00.0: cvl_0_0 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:03.184 Found net devices under 0000:09:00.1: cvl_0_1 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:03.184 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:03.184 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:03.184 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.293 ms 00:21:03.185 00:21:03.185 --- 10.0.0.2 ping statistics --- 00:21:03.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:03.185 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:21:03.185 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:03.185 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:03.185 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:21:03.185 00:21:03.185 --- 10.0.0.1 ping statistics --- 00:21:03.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:03.185 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:21:03.185 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:03.185 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:21:03.185 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:03.185 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:03.185 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:03.185 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:03.185 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:03.185 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:03.185 01:14:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:03.185 01:14:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:21:03.185 01:14:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:21:03.185 01:14:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:03.185 01:14:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.185 01:14:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=15613 00:21:03.185 01:14:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:03.185 01:14:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:03.185 01:14:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 15613 00:21:03.185 01:14:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 15613 ']' 00:21:03.185 01:14:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:03.185 01:14:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:03.185 01:14:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:03.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:03.185 01:14:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:03.185 01:14:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.185 [2024-07-16 01:14:18.838894] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:21:03.185 [2024-07-16 01:14:18.839005] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:03.185 EAL: No free 2048 kB hugepages reported on node 1 00:21:03.185 [2024-07-16 01:14:18.906783] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:03.185 [2024-07-16 01:14:19.018583] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:03.185 [2024-07-16 01:14:19.018637] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:03.185 [2024-07-16 01:14:19.018650] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:03.185 [2024-07-16 01:14:19.018661] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:03.185 [2024-07-16 01:14:19.018670] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:03.185 [2024-07-16 01:14:19.018723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:03.185 [2024-07-16 01:14:19.018791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:03.185 [2024-07-16 01:14:19.018848] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:03.185 [2024-07-16 01:14:19.018851] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:03.185 01:14:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:03.185 01:14:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:21:03.185 01:14:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:03.442 [2024-07-16 01:14:19.418722] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:03.699 01:14:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:21:03.699 01:14:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:03.699 01:14:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.699 01:14:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:21:03.957 Malloc1 00:21:03.957 01:14:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:04.214 01:14:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:04.472 01:14:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:04.729 [2024-07-16 01:14:20.593189] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:04.729 01:14:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:04.986 01:14:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:21:04.986 01:14:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:04.986 01:14:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:21:04.986 01:14:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:04.986 01:14:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:04.986 01:14:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:04.986 01:14:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:04.986 01:14:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:21:04.986 01:14:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:04.986 01:14:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:04.986 01:14:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:04.986 01:14:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:21:04.986 01:14:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:04.986 01:14:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:04.986 01:14:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:04.986 01:14:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:04.986 01:14:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:04.986 01:14:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:04.986 01:14:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:04.986 01:14:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:04.986 01:14:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:04.986 01:14:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:21:04.986 01:14:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:05.243 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:05.243 fio-3.35 00:21:05.243 Starting 1 thread 00:21:05.243 EAL: No free 2048 kB hugepages reported on node 1 00:21:07.770 00:21:07.770 test: (groupid=0, jobs=1): err= 0: pid=15973: Tue Jul 16 01:14:23 2024 00:21:07.770 read: IOPS=8931, BW=34.9MiB/s (36.6MB/s)(70.0MiB/2006msec) 00:21:07.770 slat (usec): min=2, max=113, avg= 2.80, stdev= 1.57 00:21:07.770 clat (usec): min=2309, max=13409, avg=7837.19, stdev=647.87 00:21:07.770 lat (usec): min=2332, max=13412, avg=7839.99, stdev=647.80 00:21:07.770 clat percentiles (usec): 00:21:07.770 | 1.00th=[ 6390], 5.00th=[ 6849], 10.00th=[ 7046], 20.00th=[ 7308], 00:21:07.770 | 30.00th=[ 7504], 40.00th=[ 7701], 50.00th=[ 7832], 60.00th=[ 8029], 00:21:07.770 | 70.00th=[ 8160], 80.00th=[ 8356], 90.00th=[ 8586], 95.00th=[ 8848], 00:21:07.770 | 99.00th=[ 9241], 99.50th=[ 9372], 99.90th=[11338], 99.95th=[12256], 00:21:07.770 | 99.99th=[13435] 00:21:07.770 bw ( KiB/s): min=35104, 
max=36216, per=99.91%, avg=35696.00, stdev=456.16, samples=4 00:21:07.770 iops : min= 8776, max= 9054, avg=8924.00, stdev=114.04, samples=4 00:21:07.770 write: IOPS=8947, BW=34.9MiB/s (36.6MB/s)(70.1MiB/2006msec); 0 zone resets 00:21:07.770 slat (nsec): min=2387, max=90067, avg=2964.07, stdev=1231.58 00:21:07.770 clat (usec): min=975, max=12080, avg=6449.55, stdev=536.22 00:21:07.770 lat (usec): min=982, max=12083, avg=6452.51, stdev=536.20 00:21:07.770 clat percentiles (usec): 00:21:07.770 | 1.00th=[ 5211], 5.00th=[ 5669], 10.00th=[ 5800], 20.00th=[ 6063], 00:21:07.770 | 30.00th=[ 6194], 40.00th=[ 6325], 50.00th=[ 6456], 60.00th=[ 6587], 00:21:07.770 | 70.00th=[ 6718], 80.00th=[ 6849], 90.00th=[ 7046], 95.00th=[ 7242], 00:21:07.770 | 99.00th=[ 7570], 99.50th=[ 7701], 99.90th=[10159], 99.95th=[11338], 00:21:07.770 | 99.99th=[11994] 00:21:07.770 bw ( KiB/s): min=35584, max=35856, per=99.98%, avg=35780.00, stdev=130.88, samples=4 00:21:07.770 iops : min= 8896, max= 8964, avg=8945.00, stdev=32.72, samples=4 00:21:07.770 lat (usec) : 1000=0.01% 00:21:07.770 lat (msec) : 2=0.02%, 4=0.09%, 10=99.71%, 20=0.18% 00:21:07.770 cpu : usr=61.75%, sys=35.21%, ctx=91, majf=0, minf=35 00:21:07.770 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:07.770 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.770 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:07.770 issued rwts: total=17917,17948,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:07.770 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:07.770 00:21:07.770 Run status group 0 (all jobs): 00:21:07.770 READ: bw=34.9MiB/s (36.6MB/s), 34.9MiB/s-34.9MiB/s (36.6MB/s-36.6MB/s), io=70.0MiB (73.4MB), run=2006-2006msec 00:21:07.770 WRITE: bw=34.9MiB/s (36.6MB/s), 34.9MiB/s-34.9MiB/s (36.6MB/s-36.6MB/s), io=70.1MiB (73.5MB), run=2006-2006msec 00:21:07.770 01:14:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:07.770 01:14:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:07.770 01:14:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:07.770 01:14:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:07.770 01:14:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:07.770 01:14:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:07.770 01:14:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:21:07.770 01:14:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:07.770 01:14:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:07.770 01:14:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:07.770 01:14:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:21:07.770 01:14:23 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:07.770 01:14:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:07.770 01:14:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:07.770 01:14:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:07.770 01:14:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:07.770 01:14:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:07.770 01:14:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:07.770 01:14:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:07.770 01:14:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:07.770 01:14:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:21:07.770 01:14:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:07.770 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:21:07.770 fio-3.35 00:21:07.770 Starting 1 thread 00:21:08.026 EAL: No free 2048 kB hugepages reported on node 1 00:21:09.399 [2024-07-16 01:14:25.381808] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6840 is same with the state(5) to be set 00:21:09.399 [2024-07-16 01:14:25.381894] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6840 is same with the state(5) to be set 00:21:10.332 00:21:10.332 test: (groupid=0, jobs=1): err= 0: pid=16308: Tue Jul 16 01:14:26 2024 00:21:10.332 read: IOPS=8170, BW=128MiB/s (134MB/s)(256MiB/2008msec) 00:21:10.332 slat (usec): min=2, max=116, avg= 3.89, stdev= 1.95 00:21:10.332 clat (usec): min=1982, max=55105, avg=9140.02, stdev=4157.12 00:21:10.332 lat (usec): min=1986, max=55108, avg=9143.92, stdev=4157.13 00:21:10.332 clat percentiles (usec): 00:21:10.332 | 1.00th=[ 4817], 5.00th=[ 5538], 10.00th=[ 6259], 20.00th=[ 7046], 00:21:10.332 | 30.00th=[ 7635], 40.00th=[ 8225], 50.00th=[ 8717], 60.00th=[ 9241], 00:21:10.332 | 70.00th=[ 9765], 80.00th=[10552], 90.00th=[11600], 95.00th=[12780], 00:21:10.332 | 99.00th=[15664], 99.50th=[47973], 99.90th=[53740], 99.95th=[54789], 00:21:10.332 | 99.99th=[55313] 00:21:10.332 bw ( KiB/s): min=60800, max=78112, per=52.25%, avg=68304.00, stdev=7629.55, samples=4 00:21:10.332 iops : min= 3800, max= 4882, avg=4269.00, stdev=476.85, samples=4 00:21:10.332 write: IOPS=4906, BW=76.7MiB/s (80.4MB/s)(139MiB/1818msec); 0 zone resets 00:21:10.332 slat (usec): min=30, max=193, avg=34.72, stdev= 6.53 00:21:10.332 clat (usec): min=5852, max=20439, avg=11421.98, stdev=1994.68 00:21:10.332 lat (usec): min=5883, max=20471, avg=11456.70, stdev=1994.90 00:21:10.332 clat percentiles (usec): 00:21:10.332 | 1.00th=[ 7701], 5.00th=[ 8586], 10.00th=[ 9110], 20.00th=[ 9634], 00:21:10.332 | 30.00th=[10159], 40.00th=[10683], 50.00th=[11338], 60.00th=[11863], 00:21:10.332 | 70.00th=[12387], 80.00th=[13042], 90.00th=[14222], 95.00th=[15008], 00:21:10.332 | 99.00th=[16319], 99.50th=[17433], 99.90th=[20055], 
99.95th=[20317], 00:21:10.332 | 99.99th=[20317] 00:21:10.332 bw ( KiB/s): min=61312, max=80992, per=90.26%, avg=70856.00, stdev=8315.42, samples=4 00:21:10.332 iops : min= 3832, max= 5062, avg=4428.50, stdev=519.71, samples=4 00:21:10.332 lat (msec) : 2=0.01%, 4=0.11%, 10=56.66%, 20=42.69%, 50=0.30% 00:21:10.332 lat (msec) : 100=0.23% 00:21:10.332 cpu : usr=74.59%, sys=23.77%, ctx=39, majf=0, minf=57 00:21:10.332 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:21:10.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:10.332 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:10.332 issued rwts: total=16407,8920,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:10.332 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:10.332 00:21:10.333 Run status group 0 (all jobs): 00:21:10.333 READ: bw=128MiB/s (134MB/s), 128MiB/s-128MiB/s (134MB/s-134MB/s), io=256MiB (269MB), run=2008-2008msec 00:21:10.333 WRITE: bw=76.7MiB/s (80.4MB/s), 76.7MiB/s-76.7MiB/s (80.4MB/s-80.4MB/s), io=139MiB (146MB), run=1818-1818msec 00:21:10.333 01:14:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:10.333 01:14:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:21:10.333 01:14:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:10.333 01:14:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:21:10.333 01:14:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:21:10.333 01:14:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:10.333 01:14:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:21:10.333 01:14:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:10.333 01:14:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:21:10.333 01:14:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:10.333 01:14:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:10.333 rmmod nvme_tcp 00:21:10.333 rmmod nvme_fabrics 00:21:10.333 rmmod nvme_keyring 00:21:10.590 01:14:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:10.590 01:14:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:21:10.590 01:14:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:21:10.590 01:14:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 15613 ']' 00:21:10.590 01:14:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 15613 00:21:10.590 01:14:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 15613 ']' 00:21:10.590 01:14:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 15613 00:21:10.590 01:14:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:21:10.590 01:14:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:10.590 01:14:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 15613 00:21:10.590 01:14:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:10.590 01:14:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:10.590 01:14:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 15613' 00:21:10.590 
killing process with pid 15613 00:21:10.590 01:14:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 15613 00:21:10.590 01:14:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 15613 00:21:10.849 01:14:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:10.849 01:14:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:10.849 01:14:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:10.849 01:14:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:10.849 01:14:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:10.849 01:14:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:10.849 01:14:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:10.849 01:14:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:12.774 01:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:12.774 00:21:12.774 real 0m12.207s 00:21:12.774 user 0m35.552s 00:21:12.774 sys 0m4.186s 00:21:12.774 01:14:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:12.774 01:14:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.774 ************************************ 00:21:12.774 END TEST nvmf_fio_host 00:21:12.774 ************************************ 00:21:12.774 01:14:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:12.774 01:14:28 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:12.774 01:14:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:12.774 01:14:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:12.774 01:14:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:12.774 ************************************ 00:21:12.774 START TEST nvmf_failover 00:21:12.774 ************************************ 00:21:12.774 01:14:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:13.033 * Looking for test storage... 
00:21:13.034 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:13.034 01:14:28 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:13.034 01:14:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:21:13.034 01:14:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:13.034 01:14:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:13.034 01:14:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:13.034 01:14:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:13.034 01:14:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:13.034 01:14:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:13.034 01:14:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:13.034 01:14:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:13.034 01:14:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:13.034 01:14:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:13.034 01:14:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:13.034 01:14:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:13.034 01:14:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:13.034 01:14:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:13.034 01:14:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:13.034 01:14:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:13.034 01:14:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:13.034 01:14:28 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:13.034 01:14:28 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:13.034 01:14:28 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:13.034 01:14:28 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.034 01:14:28 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.034 01:14:28 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.034 01:14:28 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:21:13.034 01:14:28 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.034 01:14:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:21:13.034 01:14:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:13.034 01:14:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:13.034 01:14:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:13.034 01:14:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:13.034 01:14:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:13.034 01:14:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:13.034 01:14:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:13.034 01:14:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:13.034 01:14:28 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:13.034 01:14:28 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:13.034 01:14:28 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:13.034 01:14:28 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:13.034 01:14:28 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:21:13.034 01:14:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:13.034 01:14:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:13.034 01:14:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:13.034 01:14:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:21:13.034 01:14:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:13.034 01:14:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:13.034 01:14:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:13.034 01:14:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:13.034 01:14:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:13.034 01:14:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:13.034 01:14:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:21:13.034 01:14:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:14.933 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:14.933 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:21:14.933 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:14.933 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:14.933 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:14.933 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:14.933 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:14.933 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:21:14.933 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:14.933 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:21:14.933 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:21:14.933 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:21:14.933 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:21:14.933 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:21:14.933 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:21:14.933 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:14.933 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:14.933 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:14.933 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:14.933 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:14.933 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:14.933 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:14.933 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:14.933 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:14.933 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:14.933 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:14.933 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:14.933 01:14:30 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:14.933 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:14.933 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:14.933 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:14.933 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:14.933 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:14.933 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:14.933 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:14.933 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:14.933 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:14.933 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:14.933 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:14.933 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:14.933 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:14.933 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:14.933 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:14.933 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:14.933 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:14.933 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:14.933 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:14.933 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:14.933 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:15.191 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:15.191 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:15.191 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:15.191 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:15.191 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:15.191 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:15.191 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:15.191 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:15.191 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:15.191 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:15.191 Found net devices under 0000:09:00.0: cvl_0_0 00:21:15.191 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:15.191 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:15.191 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:15.191 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:15.191 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:21:15.191 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:15.191 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:15.191 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:15.191 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:15.191 Found net devices under 0000:09:00.1: cvl_0_1 00:21:15.191 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:15.191 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:15.191 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:21:15.191 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:15.191 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:15.191 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:15.191 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:15.191 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:15.191 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:15.191 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:15.191 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:15.191 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:15.191 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:15.191 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:15.191 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:15.191 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:15.191 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:15.191 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:15.192 01:14:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:15.192 01:14:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:15.192 01:14:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:15.192 01:14:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:15.192 01:14:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:15.192 01:14:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:15.192 01:14:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:15.192 01:14:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:15.192 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:15.192 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:21:15.192 00:21:15.192 --- 10.0.0.2 ping statistics --- 00:21:15.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:15.192 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:21:15.192 01:14:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:15.192 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:15.192 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:21:15.192 00:21:15.192 --- 10.0.0.1 ping statistics --- 00:21:15.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:15.192 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:21:15.192 01:14:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:15.192 01:14:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:21:15.192 01:14:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:15.192 01:14:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:15.192 01:14:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:15.192 01:14:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:15.192 01:14:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:15.192 01:14:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:15.192 01:14:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:15.192 01:14:31 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:21:15.192 01:14:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:15.192 01:14:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:15.192 01:14:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:15.192 01:14:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=18613 00:21:15.192 01:14:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:15.192 01:14:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 18613 00:21:15.192 01:14:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 18613 ']' 00:21:15.192 01:14:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:15.192 01:14:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:15.192 01:14:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:15.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:15.192 01:14:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:15.192 01:14:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:15.192 [2024-07-16 01:14:31.153178] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 
00:21:15.192 [2024-07-16 01:14:31.153280] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:15.449 EAL: No free 2048 kB hugepages reported on node 1 00:21:15.449 [2024-07-16 01:14:31.219080] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:15.449 [2024-07-16 01:14:31.329302] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:15.449 [2024-07-16 01:14:31.329353] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:15.449 [2024-07-16 01:14:31.329376] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:15.449 [2024-07-16 01:14:31.329387] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:15.449 [2024-07-16 01:14:31.329397] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:15.449 [2024-07-16 01:14:31.329480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:15.450 [2024-07-16 01:14:31.329584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:15.450 [2024-07-16 01:14:31.329593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:15.450 01:14:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:15.450 01:14:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:21:15.450 01:14:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:15.450 01:14:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:15.450 01:14:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:15.739 01:14:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:15.739 01:14:31 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:15.739 [2024-07-16 01:14:31.683114] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:15.739 01:14:31 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:15.997 Malloc0 00:21:15.997 01:14:31 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:16.255 01:14:32 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:16.512 01:14:32 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:16.770 [2024-07-16 01:14:32.718404] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:16.770 01:14:32 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:17.028 [2024-07-16 
01:14:32.963141] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:17.028 01:14:32 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:17.286 [2024-07-16 01:14:33.224104] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:17.286 01:14:33 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=18906 00:21:17.286 01:14:33 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:21:17.286 01:14:33 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:17.286 01:14:33 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 18906 /var/tmp/bdevperf.sock 00:21:17.286 01:14:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 18906 ']' 00:21:17.286 01:14:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:17.286 01:14:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:17.286 01:14:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:17.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:17.286 01:14:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:17.286 01:14:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:17.852 01:14:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:17.852 01:14:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:21:17.852 01:14:33 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:18.109 NVMe0n1 00:21:18.109 01:14:34 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:18.366 00:21:18.366 01:14:34 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=19010 00:21:18.366 01:14:34 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:18.366 01:14:34 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:21:19.740 01:14:35 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:19.740 [2024-07-16 01:14:35.597646] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170c0c0 is same with the state(5) to be set 00:21:19.740 01:14:35 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:21:23.020 01:14:38 nvmf_tcp.nvmf_failover -- 
host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:23.020 00:21:23.020 01:14:38 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:23.278 01:14:39 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:21:26.558 01:14:42 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:26.558 [2024-07-16 01:14:42.471196] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:26.558 01:14:42 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:21:27.931 01:14:43 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:27.931 [2024-07-16 01:14:43.753179] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170e0e0 is same with the state(5) to be set 00:21:27.931 [2024-07-16 01:14:43.753250] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170e0e0 is same with the state(5) to be set 00:21:27.931 [2024-07-16 01:14:43.753264] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170e0e0 is same with the state(5) to be set 00:21:27.931 [2024-07-16 01:14:43.753276] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170e0e0 is same with the state(5) to be set 00:21:27.931 [2024-07-16 01:14:43.753308] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170e0e0 is same with the state(5) to be set 00:21:27.931 [2024-07-16 01:14:43.753320] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170e0e0 is same with the state(5) to be set 00:21:27.931 [2024-07-16 01:14:43.753347] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170e0e0 is same with the state(5) to be set 00:21:27.931 [2024-07-16 01:14:43.753358] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170e0e0 is same with the state(5) to be set 00:21:27.931 [2024-07-16 01:14:43.753370] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170e0e0 is same with the state(5) to be set 00:21:27.931 [2024-07-16 01:14:43.753381] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170e0e0 is same with the state(5) to be set 00:21:27.931 [2024-07-16 01:14:43.753392] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170e0e0 is same with the state(5) to be set 00:21:27.931 [2024-07-16 01:14:43.753403] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170e0e0 is same with the state(5) to be set 00:21:27.931 [2024-07-16 01:14:43.753415] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170e0e0 is same with the state(5) to be set 00:21:27.931 [2024-07-16 01:14:43.753427] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170e0e0 is same with the state(5) to be set 00:21:27.931 [2024-07-16 01:14:43.753439] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170e0e0 is same with the state(5) to be set 00:21:27.931 [same recv-state message for tqpair=0x170e0e0 repeated at timestamps 01:14:43.753451 through 01:14:43.753691] 00:21:27.931 [2024-07-16 01:14:43.753702] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170e0e0 is same with the state(5) to be set 00:21:27.932 01:14:43 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 19010 00:21:34.489 0 00:21:34.489 01:14:49 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 18906 00:21:34.489 01:14:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 18906 ']' 00:21:34.489 01:14:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 18906 00:21:34.489 01:14:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:21:34.489 01:14:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:34.489 01:14:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 18906 00:21:34.489 01:14:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:34.489 01:14:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:34.489 01:14:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 18906'
killing process with pid 18906 00:21:34.489 01:14:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 18906 00:21:34.489 01:14:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 18906 00:21:34.489 01:14:49 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:34.489 [2024-07-16 01:14:33.287905] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:21:34.490 [2024-07-16 01:14:33.288052] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid18906 ] 00:21:34.490 EAL: No free 2048 kB hugepages reported on node 1 00:21:34.490 [2024-07-16 01:14:33.347042] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:34.490 [2024-07-16 01:14:33.457921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:34.490 Running I/O for 15 seconds...
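
The try.txt dump below records the I/O that was in flight while listeners were being removed. For orientation, here is a minimal, untested sketch of the RPC sequence the failover xtrace above drives. It is an editorial condensation, not part of the test output: rpc.py abbreviates the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path, the default target RPC socket (/var/tmp/spdk.sock) and the port loop are assumptions, and only the subcommands, flags, and addresses are taken from the trace itself.

    #!/usr/bin/env bash
    # Target side: TCP transport, one malloc-backed namespace, three listeners
    # (multiple listeners give the host alternate paths to fail over to).
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do
        rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
    done
    # Host side: bdevperf (its RPC socket is /var/tmp/bdevperf.sock) attaches the
    # same bdev name through two listeners, then the test removes listeners while
    # I/O runs so the remaining path has to absorb the traffic.
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 3
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

The repeated nvmf_tcp_qpair_set_recv_state errors collapsed above, and the ABORTED - SQ DELETION completions in the dump that follows, are the expected symptom of this sequence: each listener removal tears down the qpair and aborts its submission queue, after which bdevperf retries on a surviving path.
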
00:21:34.490 [2024-07-16 01:14:35.598057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:79544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.490 [2024-07-16 01:14:35.598100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.490 [2024-07-16 01:14:35.598130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:79552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.490 [2024-07-16 01:14:35.598147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.490 [2024-07-16 01:14:35.598163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:79560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.490 [2024-07-16 01:14:35.598178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.490 [2024-07-16 01:14:35.598194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:79568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.490 [2024-07-16 01:14:35.598209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.490 [2024-07-16 01:14:35.598227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.490 [2024-07-16 01:14:35.598252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.490 [2024-07-16 01:14:35.598268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:79584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.490 [2024-07-16 01:14:35.598283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.490 [2024-07-16 01:14:35.598300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:79592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.490 [2024-07-16 01:14:35.598315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.490 [2024-07-16 01:14:35.598332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:79600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.490 [2024-07-16 01:14:35.598347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.490 [2024-07-16 01:14:35.598363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:79608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.490 [2024-07-16 01:14:35.598377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.490 [2024-07-16 01:14:35.598393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:79616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.490 [2024-07-16 01:14:35.598408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.490 [2024-07-16 01:14:35.598424] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:79624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.490 [2024-07-16 01:14:35.598438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.490 [2024-07-16 01:14:35.598465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:79632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.490 [2024-07-16 01:14:35.598489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.490 [2024-07-16 01:14:35.598506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:79640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.490 [2024-07-16 01:14:35.598534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.490 [2024-07-16 01:14:35.598552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:79648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.490 [2024-07-16 01:14:35.598566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.490 [2024-07-16 01:14:35.598580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:79656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.490 [2024-07-16 01:14:35.598610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.490 [2024-07-16 01:14:35.598625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:79664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.490 [2024-07-16 01:14:35.598637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.490 [2024-07-16 01:14:35.598652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:79672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.490 [2024-07-16 01:14:35.598666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.490 [2024-07-16 01:14:35.598681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:79680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.490 [2024-07-16 01:14:35.598694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.490 [2024-07-16 01:14:35.598708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.490 [2024-07-16 01:14:35.598721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.490 [2024-07-16 01:14:35.598736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:78848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.490 [2024-07-16 01:14:35.598750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.490 [2024-07-16 01:14:35.598765] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:49 nsid:1 lba:78856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.490 [2024-07-16 01:14:35.598778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.490 [2024-07-16 01:14:35.598793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:78864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.490 [2024-07-16 01:14:35.598806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.490 [2024-07-16 01:14:35.598821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:78872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.490 [2024-07-16 01:14:35.598835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.490 [2024-07-16 01:14:35.598850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:78880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.490 [2024-07-16 01:14:35.598866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.490 [2024-07-16 01:14:35.598882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:78888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.490 [2024-07-16 01:14:35.598906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.490 [2024-07-16 01:14:35.598921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:78896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.490 [2024-07-16 01:14:35.598947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.490 [2024-07-16 01:14:35.598983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:78904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.490 [2024-07-16 01:14:35.598998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.490 [2024-07-16 01:14:35.599014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:78912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.490 [2024-07-16 01:14:35.599038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.490 [2024-07-16 01:14:35.599054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.490 [2024-07-16 01:14:35.599066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.490 [2024-07-16 01:14:35.599081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:78928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.490 [2024-07-16 01:14:35.599095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.490 [2024-07-16 01:14:35.599110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 
lba:78936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.490 [2024-07-16 01:14:35.599124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.490 [2024-07-16 01:14:35.599138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:78944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.490 [2024-07-16 01:14:35.599151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.490 [2024-07-16 01:14:35.599166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:78952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.490 [2024-07-16 01:14:35.599179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.490 [2024-07-16 01:14:35.599194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:78960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.490 [2024-07-16 01:14:35.599207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.490 [2024-07-16 01:14:35.599222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:79696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.490 [2024-07-16 01:14:35.599235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.490 [2024-07-16 01:14:35.599265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:79704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.490 [2024-07-16 01:14:35.599287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.490 [2024-07-16 01:14:35.599306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:79712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.490 [2024-07-16 01:14:35.599319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.490 [2024-07-16 01:14:35.599333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:79720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.490 [2024-07-16 01:14:35.599346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.490 [2024-07-16 01:14:35.599361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.490 [2024-07-16 01:14:35.599374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.490 [2024-07-16 01:14:35.599394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:79736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.490 [2024-07-16 01:14:35.599406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.491 [2024-07-16 01:14:35.599421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:79744 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:21:34.491 [2024-07-16 01:14:35.599434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
[log condensed: repeated NOTICE pairs from nvme_qpair.c (243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion) for WRITE sqid:1 lba:79752-79856 len:8 and READ sqid:1 lba:78968-79536 len:8, each command completed ABORTED - SQ DELETION (00/08) qid:1 cid:0] 
00:21:34.493 [2024-07-16 01:14:35.602147] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32380 is same with the state(5) to be set 
00:21:34.493 [2024-07-16 01:14:35.602166] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:21:34.493 [2024-07-16 01:14:35.602178] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:21:34.493 [2024-07-16 01:14:35.602189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79864 len:8 PRP1 0x0 PRP2 0x0 
00:21:34.493 [2024-07-16 01:14:35.602202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:34.493 [2024-07-16 01:14:35.602290] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf32380 was disconnected and freed. reset controller. 
00:21:34.493 [2024-07-16 01:14:35.602310] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 
00:21:34.493 [2024-07-16 01:14:35.602358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:34.493 [2024-07-16 01:14:35.602378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
[log condensed: the same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeats for qid:0 cid:1, cid:2, and cid:3] 
00:21:34.493 [2024-07-16 01:14:35.602475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:34.493 [2024-07-16 01:14:35.605814] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:21:34.493 [2024-07-16 01:14:35.605851] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf0c2e0 (9): Bad file descriptor 
00:21:34.493 [2024-07-16 01:14:35.729645] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:34.493 [2024-07-16 01:14:39.213777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:83280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:34.493 [2024-07-16 01:14:39.213838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
[log condensed: repeated NOTICE pairs from nvme_qpair.c (243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion) for WRITE sqid:1 lba:83288-83344 len:8 and READ sqid:1 lba:82328-82968 len:8, each command completed ABORTED - SQ DELETION (00/08) qid:1 cid:0] 
00:21:34.495 [2024-07-16 01:14:39.216674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:82976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:34.495 [2024-07-16 01:14:39.216689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.495 [2024-07-16 01:14:39.216704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:82984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.495 [2024-07-16 01:14:39.216717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.495 [2024-07-16 01:14:39.216732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:82992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.495 [2024-07-16 01:14:39.216746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.495 [2024-07-16 01:14:39.216761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:83000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.495 [2024-07-16 01:14:39.216775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.495 [2024-07-16 01:14:39.216790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:83008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.495 [2024-07-16 01:14:39.216804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.495 [2024-07-16 01:14:39.216819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:83016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.495 [2024-07-16 01:14:39.216832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.495 [2024-07-16 01:14:39.216848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:83024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.495 [2024-07-16 01:14:39.216861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.495 [2024-07-16 01:14:39.216876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:83032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.495 [2024-07-16 01:14:39.216889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.495 [2024-07-16 01:14:39.216904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:83040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.495 [2024-07-16 01:14:39.216924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.495 [2024-07-16 01:14:39.216940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:83048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.495 [2024-07-16 01:14:39.216959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.495 [2024-07-16 01:14:39.216993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:83056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.495 [2024-07-16 01:14:39.217007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:34.495 [2024-07-16 01:14:39.217023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:83064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.495 [2024-07-16 01:14:39.217037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.495 [2024-07-16 01:14:39.217056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:83072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.495 [2024-07-16 01:14:39.217071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.495 [2024-07-16 01:14:39.217087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:83080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.495 [2024-07-16 01:14:39.217101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.495 [2024-07-16 01:14:39.217118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:83088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.495 [2024-07-16 01:14:39.217131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.495 [2024-07-16 01:14:39.217147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:83096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.496 [2024-07-16 01:14:39.217161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.496 [2024-07-16 01:14:39.217177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:83104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.496 [2024-07-16 01:14:39.217190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.496 [2024-07-16 01:14:39.217206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:83112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.496 [2024-07-16 01:14:39.217220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.496 [2024-07-16 01:14:39.217236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:83120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.496 [2024-07-16 01:14:39.217250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.496 [2024-07-16 01:14:39.217280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:83128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.496 [2024-07-16 01:14:39.217294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.496 [2024-07-16 01:14:39.217309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:83136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.496 [2024-07-16 01:14:39.217322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.496 [2024-07-16 01:14:39.217337] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:83144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.496 [2024-07-16 01:14:39.217351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.496 [2024-07-16 01:14:39.217365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:83152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.496 [2024-07-16 01:14:39.217378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.496 [2024-07-16 01:14:39.217393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:83160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.496 [2024-07-16 01:14:39.217407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.496 [2024-07-16 01:14:39.217422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:83168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.496 [2024-07-16 01:14:39.217440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.496 [2024-07-16 01:14:39.217459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:83176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.496 [2024-07-16 01:14:39.217473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.496 [2024-07-16 01:14:39.217488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:83184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.496 [2024-07-16 01:14:39.217502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.496 [2024-07-16 01:14:39.217517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:83192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.496 [2024-07-16 01:14:39.217531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.496 [2024-07-16 01:14:39.217546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.496 [2024-07-16 01:14:39.217560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.496 [2024-07-16 01:14:39.217576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:83208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.496 [2024-07-16 01:14:39.217589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.496 [2024-07-16 01:14:39.217604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:83216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.496 [2024-07-16 01:14:39.217618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.496 [2024-07-16 01:14:39.217633] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:22 nsid:1 lba:83224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.496 [2024-07-16 01:14:39.217646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.496 [2024-07-16 01:14:39.217661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.496 [2024-07-16 01:14:39.217674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.496 [2024-07-16 01:14:39.217689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:83240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.496 [2024-07-16 01:14:39.217702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.496 [2024-07-16 01:14:39.217717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:83248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.496 [2024-07-16 01:14:39.217730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.496 [2024-07-16 01:14:39.217745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:83256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.496 [2024-07-16 01:14:39.217758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.496 [2024-07-16 01:14:39.217774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:83264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.496 [2024-07-16 01:14:39.217787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.496 [2024-07-16 01:14:39.217803] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d6ee0 is same with the state(5) to be set 00:21:34.496 [2024-07-16 01:14:39.217823] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:34.496 [2024-07-16 01:14:39.217836] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:34.496 [2024-07-16 01:14:39.217848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83272 len:8 PRP1 0x0 PRP2 0x0 00:21:34.496 [2024-07-16 01:14:39.217861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.496 [2024-07-16 01:14:39.217925] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10d6ee0 was disconnected and freed. reset controller. 
00:21:34.496 [2024-07-16 01:14:39.217968] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:21:34.496 [2024-07-16 01:14:39.218008 .. 01:14:39.218113] nvme_qpair.c: 223:nvme_admin_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3..0 nsid:0 cdw10:00000000 cdw11:00000000, each completed ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:34.496 [2024-07-16 01:14:39.218127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:34.496 [2024-07-16 01:14:39.221462] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:34.496 [2024-07-16 01:14:39.221501] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf0c2e0 (9): Bad file descriptor
00:21:34.496 [2024-07-16 01:14:39.298820] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
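A note on decoding the completion records above: the "(00/08)" pair printed by spdk_nvme_print_completion is the NVMe (status code type / status code) -- generic status (0x00), ABORTED - SQ DELETION (0x08) -- and the trailing p/m/dnr fields are the phase, more, and do-not-retry bits of the status word. A minimal sketch, assuming only the public SPDK headers, of how an application completion callback could recognize this status (the callback and the requeue hook are hypothetical, not part of this test):

```c
#include "spdk/nvme.h"

/*
 * Illustrative only. Recognizes the status that spdk_nvme_print_completion
 * renders as "ABORTED - SQ DELETION (00/08)":
 * (00/08) == (SPDK_NVME_SCT_GENERIC / SPDK_NVME_SC_ABORTED_SQ_DELETION).
 */
static void
io_complete_cb(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	if (spdk_nvme_cpl_is_error(cpl)) {
		if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
		    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION &&
		    !cpl->status.dnr) {
			/*
			 * The submission queue was deleted out from under the
			 * command (controller reset/failover). dnr == 0, so
			 * the command is safe to retry on the surviving path.
			 */
			/* requeue_io(cb_arg);  -- application-specific retry */
			return;
		}
		/* Handle other errors... */
		return;
	}
	/* Success path... */
}
```

Every aborted record in this run carries dnr:0, which is consistent with the bdev_nvme layer retrying the I/O once it logs "Resetting controller successful."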
00:21:34.496 [2024-07-16 01:14:43.754783 .. 01:14:43.755986] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 nsid:1 lba:21336..21640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:34.497 [2024-07-16 01:14:43.756002 .. 01:14:43.757431] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: WRITE sqid:1 nsid:1 lba:21664..22040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:34.499 [2024-07-16 01:14:43.757464 .. 01:14:43.758822] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o / 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: WRITE sqid:1 cid:0 nsid:1 lba:22048..22256 len:8 PRP1 0x0 PRP2 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (final record truncated in the captured log)
dnr:0 00:21:34.500 [2024-07-16 01:14:43.758839] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:34.500 [2024-07-16 01:14:43.758850] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:34.500 [2024-07-16 01:14:43.758861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22264 len:8 PRP1 0x0 PRP2 0x0 00:21:34.500 [2024-07-16 01:14:43.758873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.500 [2024-07-16 01:14:43.758886] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:34.500 [2024-07-16 01:14:43.758897] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:34.500 [2024-07-16 01:14:43.758908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:8 PRP1 0x0 PRP2 0x0 00:21:34.500 [2024-07-16 01:14:43.758921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.500 [2024-07-16 01:14:43.758934] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:34.500 [2024-07-16 01:14:43.758963] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:34.500 [2024-07-16 01:14:43.758992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22280 len:8 PRP1 0x0 PRP2 0x0 00:21:34.500 [2024-07-16 01:14:43.759005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.500 [2024-07-16 01:14:43.759021] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:34.500 [2024-07-16 01:14:43.759032] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:34.500 [2024-07-16 01:14:43.759045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22288 len:8 PRP1 0x0 PRP2 0x0 00:21:34.500 [2024-07-16 01:14:43.759058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.500 [2024-07-16 01:14:43.759072] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:34.500 [2024-07-16 01:14:43.759083] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:34.500 [2024-07-16 01:14:43.759094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22296 len:8 PRP1 0x0 PRP2 0x0 00:21:34.500 [2024-07-16 01:14:43.759107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.500 [2024-07-16 01:14:43.759121] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:34.500 [2024-07-16 01:14:43.759132] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:34.500 [2024-07-16 01:14:43.759143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22304 len:8 PRP1 0x0 PRP2 0x0 00:21:34.500 [2024-07-16 01:14:43.759156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.500 [2024-07-16 01:14:43.759170] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:34.500 [2024-07-16 01:14:43.759181] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:34.500 [2024-07-16 01:14:43.759193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22312 len:8 PRP1 0x0 PRP2 0x0 00:21:34.500 [2024-07-16 01:14:43.759206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.500 [2024-07-16 01:14:43.759219] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:34.500 [2024-07-16 01:14:43.759230] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:34.500 [2024-07-16 01:14:43.759241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22320 len:8 PRP1 0x0 PRP2 0x0 00:21:34.500 [2024-07-16 01:14:43.759268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.500 [2024-07-16 01:14:43.759298] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:34.500 [2024-07-16 01:14:43.759309] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:34.500 [2024-07-16 01:14:43.759320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22328 len:8 PRP1 0x0 PRP2 0x0 00:21:34.500 [2024-07-16 01:14:43.759334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.500 [2024-07-16 01:14:43.759347] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:34.500 [2024-07-16 01:14:43.759358] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:34.500 [2024-07-16 01:14:43.759369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:8 PRP1 0x0 PRP2 0x0 00:21:34.500 [2024-07-16 01:14:43.759382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.500 [2024-07-16 01:14:43.759395] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:34.500 [2024-07-16 01:14:43.759405] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:34.500 [2024-07-16 01:14:43.759416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22344 len:8 PRP1 0x0 PRP2 0x0 00:21:34.500 [2024-07-16 01:14:43.759429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.500 [2024-07-16 01:14:43.759441] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:34.500 [2024-07-16 01:14:43.759452] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:34.500 [2024-07-16 01:14:43.759463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22352 len:8 PRP1 0x0 PRP2 0x0 00:21:34.500 [2024-07-16 01:14:43.759475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.500 [2024-07-16 01:14:43.759488] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:21:34.500 [2024-07-16 01:14:43.759499] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:34.500 [2024-07-16 01:14:43.759510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21648 len:8 PRP1 0x0 PRP2 0x0 00:21:34.500 [2024-07-16 01:14:43.759522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.500 [2024-07-16 01:14:43.759535] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:34.500 [2024-07-16 01:14:43.759546] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:34.500 [2024-07-16 01:14:43.759558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21656 len:8 PRP1 0x0 PRP2 0x0 00:21:34.500 [2024-07-16 01:14:43.759571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.500 [2024-07-16 01:14:43.759631] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf3bf80 was disconnected and freed. reset controller. 00:21:34.500 [2024-07-16 01:14:43.759651] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:21:34.500 [2024-07-16 01:14:43.759686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:34.500 [2024-07-16 01:14:43.759721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.500 [2024-07-16 01:14:43.759738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:34.500 [2024-07-16 01:14:43.759756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.501 [2024-07-16 01:14:43.759771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:34.501 [2024-07-16 01:14:43.759784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.501 [2024-07-16 01:14:43.759799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:34.501 [2024-07-16 01:14:43.759814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.501 [2024-07-16 01:14:43.759827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:34.501 [2024-07-16 01:14:43.763149] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:34.501 [2024-07-16 01:14:43.763189] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf0c2e0 (9): Bad file descriptor 00:21:34.501 [2024-07-16 01:14:43.796713] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
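For context, the reset above has somewhere to fail over to only because NVMe0 was attached with several transport IDs. A minimal sketch of that registration, assuming SPDK's stock scripts/rpc.py and reusing the socket, address and NQN that appear verbatim in this trace:

    # Primary path; -b names the controller that bdevperf will exercise.
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # Attaching the same -b NVMe0 again on 4421 and 4422 registers failover trids;
    # when the active qpair drops, bdev_nvme resets onto the next registered path.
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1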
00:21:34.501
00:21:34.501                   Latency(us)
00:21:34.501 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:34.501 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:21:34.501 Verification LBA range: start 0x0 length 0x4000
00:21:34.501 NVMe0n1 : 15.01 8415.15 32.87 586.23 0.00 14191.15 561.30 19418.07
00:21:34.501 ===================================================================================================================
00:21:34.501 Total : 8415.15 32.87 586.23 0.00 14191.15 561.30 19418.07
00:21:34.501 Received shutdown signal, test time was about 15.000000 seconds
00:21:34.501
00:21:34.501                   Latency(us)
00:21:34.501 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:34.501 ===================================================================================================================
00:21:34.501 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:34.501 01:14:49 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:21:34.501 01:14:49 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
01:14:49 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
01:14:49 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=20786
01:14:49 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
01:14:49 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 20786 /var/tmp/bdevperf.sock
01:14:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 20786 ']'
01:14:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
01:14:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
01:14:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
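The bdevperf invocation above is worth unpacking; a sketch of the same command with the flags annotated (flag meanings per SPDK's bdevperf usage text; -f is passed through by failover.sh as shown and left uninterpreted here):

    #   -z            start idle and wait for a perform_tests RPC before running
    #   -r PATH       RPC listen socket (/var/tmp/bdevperf.sock)
    #   -q 128        queue depth
    #   -o 4096       I/O size in bytes
    #   -w verify     verification workload (write, read back, compare)
    #   -t 1          run time in seconds
    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
    # Once the socket is up, the run is kicked off out-of-band, as done later in this trace:
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests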
00:21:34.501 01:14:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:34.501 01:14:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:34.501 01:14:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:34.501 01:14:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:21:34.501 01:14:50 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:34.501 [2024-07-16 01:14:50.390187] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:34.501 01:14:50 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:34.758 [2024-07-16 01:14:50.670932] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:34.758 01:14:50 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:35.014 NVMe0n1 00:21:35.014 01:14:50 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:35.578 00:21:35.578 01:14:51 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:36.141 00:21:36.141 01:14:51 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:36.141 01:14:51 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:21:36.398 01:14:52 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:36.654 01:14:52 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:21:39.931 01:14:55 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:39.931 01:14:55 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:21:39.931 01:14:55 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=21451 00:21:39.931 01:14:55 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:39.931 01:14:55 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 21451 00:21:40.918 0 00:21:40.918 01:14:56 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:40.918 [2024-07-16 01:14:49.833327] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 
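Condensing the xtrace above, the failover check is a short sequence: expose extra listeners on the target, attach all three paths through the bdevperf RPC socket, detach the active path to force a controller reset, then count the 'Resetting controller successful' lines in the bdevperf log (try.txt in this run). A sketch, with every address and NQN taken from the trace:

    rpc=scripts/rpc.py
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    # ...attach NVMe0 on 4420/4421/4422 via -s /var/tmp/bdevperf.sock, as sketched earlier...
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1   # drop the active path
    grep -c 'Resetting controller successful' test/nvmf/host/try.txt   # the script expects 3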
00:21:40.918 [2024-07-16 01:14:49.833418] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid20786 ] 00:21:40.918 EAL: No free 2048 kB hugepages reported on node 1 00:21:40.918 [2024-07-16 01:14:49.892130] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:40.918 [2024-07-16 01:14:49.997256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:40.918 [2024-07-16 01:14:52.404711] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:40.918 [2024-07-16 01:14:52.404813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:40.918 [2024-07-16 01:14:52.404837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.918 [2024-07-16 01:14:52.404855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:40.918 [2024-07-16 01:14:52.404869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.918 [2024-07-16 01:14:52.404884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:40.918 [2024-07-16 01:14:52.404898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.918 [2024-07-16 01:14:52.404914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:40.918 [2024-07-16 01:14:52.404928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.918 [2024-07-16 01:14:52.404953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:40.918 [2024-07-16 01:14:52.405013] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:40.918 [2024-07-16 01:14:52.405047] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x149d2e0 (9): Bad file descriptor 00:21:40.918 [2024-07-16 01:14:52.415323] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:40.918 Running I/O for 1 seconds... 
00:21:40.918
00:21:40.918                   Latency(us)
00:21:40.918 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:40.918 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:21:40.918 Verification LBA range: start 0x0 length 0x4000
00:21:40.918 NVMe0n1 : 1.01 8667.61 33.86 0.00 0.00 14708.13 3094.76 14660.65
00:21:40.918 ===================================================================================================================
00:21:40.918 Total : 8667.61 33.86 0.00 0.00 14708.13 3094.76 14660.65
00:21:40.918 01:14:56 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
01:14:56 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
01:14:57 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
01:14:57 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
01:14:57 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
01:14:57 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
01:14:57 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3
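As a sanity check on the table above, the MiB/s column follows from IOPS and the 4096-byte I/O size: MiB/s = IOPS * 4096 / 2^20.

    awk 'BEGIN { printf "%.2f\n", 8667.61 * 4096 / 1048576 }'   # 33.86, the 1 s run above
    awk 'BEGIN { printf "%.2f\n", 8415.15 * 4096 / 1048576 }'   # 32.87, the 15 s run earlier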
nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:45.775 01:15:01 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:21:45.775 01:15:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:45.775 01:15:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:21:45.775 01:15:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:45.775 01:15:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:21:45.775 01:15:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:45.775 01:15:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:45.775 rmmod nvme_tcp 00:21:46.033 rmmod nvme_fabrics 00:21:46.033 rmmod nvme_keyring 00:21:46.033 01:15:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:46.033 01:15:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:21:46.033 01:15:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:21:46.033 01:15:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 18613 ']' 00:21:46.033 01:15:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 18613 00:21:46.033 01:15:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 18613 ']' 00:21:46.033 01:15:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 18613 00:21:46.033 01:15:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:21:46.033 01:15:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:46.033 01:15:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 18613 00:21:46.033 01:15:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:46.033 01:15:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:46.033 01:15:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 18613' 00:21:46.033 killing process with pid 18613 00:21:46.033 01:15:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 18613 00:21:46.033 01:15:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 18613 00:21:46.293 01:15:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:46.293 01:15:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:46.293 01:15:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:46.293 01:15:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:46.293 01:15:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:46.293 01:15:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:46.293 01:15:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:46.293 01:15:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:48.198 01:15:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:48.198 00:21:48.198 real 0m35.416s 00:21:48.198 user 2m3.407s 00:21:48.198 sys 0m6.347s 00:21:48.198 01:15:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:48.198 01:15:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:48.198 
************************************ 00:21:48.198 END TEST nvmf_failover 00:21:48.198 ************************************ 00:21:48.198 01:15:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:48.198 01:15:04 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:48.198 01:15:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:48.198 01:15:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:48.198 01:15:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:48.456 ************************************ 00:21:48.456 START TEST nvmf_host_discovery 00:21:48.456 ************************************ 00:21:48.456 01:15:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:48.456 * Looking for test storage... 00:21:48.456 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:48.456 01:15:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:48.456 01:15:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:21:48.456 01:15:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:48.456 01:15:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:48.456 01:15:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:48.456 01:15:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:48.456 01:15:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:48.456 01:15:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:48.456 01:15:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:48.456 01:15:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:48.456 01:15:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:48.456 01:15:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:48.456 01:15:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:48.456 01:15:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:48.456 01:15:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:48.456 01:15:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:48.456 01:15:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:48.456 01:15:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:48.457 01:15:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:48.457 01:15:04 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:48.457 01:15:04 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:48.457 01:15:04 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh
00:21:48.457 01:15:04 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2-4 -- # PATH=... (condensed: lines @2-@4 each prepend /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin -- already present several times over from earlier sourcing -- ahead of /usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin)
00:21:48.457 01:15:04 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH
00:21:48.457 01:15:04 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo "$PATH" (condensed: same value as above)
00:21:48.457 01:15:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0
00:21:48.457 01:15:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:21:48.457 01:15:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:21:48.457 01:15:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:21:48.457 01:15:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:21:48.457 01:15:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:21:48.457 01:15:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:21:48.457 01:15:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:21:48.457 01:15:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0
00:21:48.457 01:15:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']'
00:21:48.457 01:15:04
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:21:48.457 01:15:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:21:48.457 01:15:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:21:48.457 01:15:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:21:48.457 01:15:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:21:48.457 01:15:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:21:48.457 01:15:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:48.457 01:15:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:48.457 01:15:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:48.457 01:15:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:48.457 01:15:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:48.457 01:15:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:48.457 01:15:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:48.457 01:15:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:48.457 01:15:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:48.457 01:15:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:48.457 01:15:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:21:48.457 01:15:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:50.987 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:50.987 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:21:50.987 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:50.987 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:50.987 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:50.987 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:50.987 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:50.987 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:21:50.987 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:50.987 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:21:50.987 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:21:50.987 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:21:50.987 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:21:50.987 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:21:50.987 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:21:50.987 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:50.987 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:50.987 01:15:06 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:50.987 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:50.987 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:50.987 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:50.987 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:50.987 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:50.987 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:50.987 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:50.987 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:50.987 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:50.987 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:50.987 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:50.987 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:50.987 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:50.987 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:50.987 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:50.987 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:50.987 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:50.987 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:50.987 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:50.987 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:50.987 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:50.987 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:50.987 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:50.987 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:50.987 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:50.987 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:50.987 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:50.987 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:50.987 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:50.987 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:50.987 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:50.987 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:50.987 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:50.987 01:15:06 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:50.987 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:50.987 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:50.987 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:50.987 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:50.987 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:50.987 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:50.987 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:50.987 Found net devices under 0000:09:00.0: cvl_0_0 00:21:50.987 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:50.987 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:50.987 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:50.987 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:50.987 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:50.987 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:50.987 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:50.987 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:50.987 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:50.987 Found net devices under 0000:09:00.1: cvl_0_1 00:21:50.987 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:50.988 01:15:06 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:21:50.988 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:21:50.988 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms
00:21:50.988
00:21:50.988 --- 10.0.0.2 ping statistics ---
00:21:50.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:50.988 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms
00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:21:50.988 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:21:50.988 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms
00:21:50.988
00:21:50.988 --- 10.0.0.1 ping statistics ---
00:21:50.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:50.988 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms
00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0
00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2
00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable
00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=24288
00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 24288 00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 24288 ']' 00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:50.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:50.988 [2024-07-16 01:15:06.634366] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:21:50.988 [2024-07-16 01:15:06.634460] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:50.988 EAL: No free 2048 kB hugepages reported on node 1 00:21:50.988 [2024-07-16 01:15:06.698489] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:50.988 [2024-07-16 01:15:06.801479] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:50.988 [2024-07-16 01:15:06.801554] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:50.988 [2024-07-16 01:15:06.801568] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:50.988 [2024-07-16 01:15:06.801578] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:50.988 [2024-07-16 01:15:06.801588] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
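The namespace plumbing in the trace above reduces to a few commands: one physical e810 port (cvl_0_0, 10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace for the target, while its peer (cvl_0_1, 10.0.0.1) stays in the root namespace for the initiator, so NVMe/TCP traffic crosses the physical link. A condensed sketch of the same sequence:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    # Launch the target inside the namespace: -i 0 shared-memory id, -e 0xFFFF
    # tracepoint group mask, -m 0x2 core mask (core 1), per the command above.
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2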
00:21:50.988 [2024-07-16 01:15:06.801639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:50.988 [2024-07-16 01:15:06.927473] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:50.988 [2024-07-16 01:15:06.935640] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:50.988 null0 00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:50.988 null1 00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=24323 00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 
-- # waitforlisten 24323 /tmp/host.sock 00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 24323 ']' 00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:21:50.988 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:50.988 01:15:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:51.246 [2024-07-16 01:15:07.008786] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:21:51.247 [2024-07-16 01:15:07.008863] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid24323 ] 00:21:51.247 EAL: No free 2048 kB hugepages reported on node 1 00:21:51.247 [2024-07-16 01:15:07.068761] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:51.247 [2024-07-16 01:15:07.179912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:51.505 01:15:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:51.505 01:15:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:21:51.505 01:15:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:51.505 01:15:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:21:51.505 01:15:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.505 01:15:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:51.505 01:15:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.505 01:15:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:21:51.505 01:15:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.505 01:15:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:51.505 01:15:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.505 01:15:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:21:51.505 01:15:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:21:51.505 01:15:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:51.505 01:15:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:51.505 01:15:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.505 01:15:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:51.505 01:15:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 
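Up to this point the run is pure bench wiring: a TCP transport and a discovery listener on the target, two null bdevs to serve as namespaces, and a second nvmf_tgt instance acting as the discovering host on its own RPC socket. A minimal sketch of that setup as direct calls, assembled from the commands visible in the trace (addresses, ports, sizes, flags and NQNs are copied from the log; the rpc.py and nvmf_tgt paths assume a stock SPDK checkout):

    # Target side (default RPC socket); flags copied verbatim from the trace.
    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
        -t tcp -a 10.0.0.2 -s 8009
    $rpc bdev_null_create null0 1000 512   # 1000 MiB, 512 B blocks
    $rpc bdev_null_create null1 1000 512
    $rpc bdev_wait_for_examine

    # Host side: a second SPDK app on core 0, RPC socket /tmp/host.sock.
    ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
    $rpc -s /tmp/host.sock log_set_flag bdev_nvme
    $rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test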
00:21:51.505 01:15:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:51.505 01:15:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.505 01:15:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:21:51.505 01:15:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:21:51.505 01:15:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:51.505 01:15:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.505 01:15:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:51.505 01:15:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:51.505 01:15:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:51.505 01:15:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:51.505 01:15:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.505 01:15:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:21:51.505 01:15:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:21:51.505 01:15:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.505 01:15:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:51.505 01:15:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.505 01:15:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:21:51.505 01:15:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:51.505 01:15:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.505 01:15:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:51.505 01:15:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:51.505 01:15:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:51.505 01:15:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:51.505 01:15:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.505 01:15:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:21:51.505 01:15:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:21:51.505 01:15:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:51.505 01:15:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.505 01:15:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:51.505 01:15:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:51.505 01:15:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:51.505 01:15:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:51.505 01:15:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.505 01:15:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:21:51.505 01:15:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:21:51.505 01:15:07 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.505 01:15:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:51.505 01:15:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.505 01:15:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:21:51.505 01:15:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:51.505 01:15:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.505 01:15:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:51.505 01:15:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:51.505 01:15:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:51.505 01:15:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:51.505 01:15:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.763 01:15:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:21:51.763 01:15:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:21:51.763 01:15:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:51.763 01:15:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.763 01:15:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:51.763 01:15:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:51.763 01:15:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:51.763 01:15:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:51.763 01:15:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.763 01:15:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:21:51.763 01:15:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:51.763 01:15:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.763 01:15:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:51.763 [2024-07-16 01:15:07.565361] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:51.763 01:15:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.763 01:15:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:21:51.763 01:15:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:51.763 01:15:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:51.763 01:15:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.763 01:15:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:51.763 01:15:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:51.763 01:15:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:51.763 01:15:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.763 01:15:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' 
]] 00:21:51.763 01:15:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:21:51.763 01:15:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:51.763 01:15:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:51.763 01:15:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.763 01:15:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:51.763 01:15:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:51.763 01:15:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:51.763 01:15:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.763 01:15:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:21:51.763 01:15:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:21:51.763 01:15:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:51.763 01:15:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:51.763 01:15:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:51.763 01:15:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:51.763 01:15:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:51.763 01:15:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:51.763 01:15:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:51.763 01:15:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:51.763 01:15:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.764 01:15:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:51.764 01:15:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:51.764 01:15:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.764 01:15:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:51.764 01:15:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:21:51.764 01:15:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:51.764 01:15:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:51.764 01:15:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:21:51.764 01:15:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.764 01:15:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:51.764 01:15:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.764 01:15:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:51.764 01:15:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:51.764 01:15:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:51.764 01:15:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:51.764 01:15:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:51.764 01:15:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:51.764 01:15:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:51.764 01:15:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:51.764 01:15:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.764 01:15:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:51.764 01:15:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:51.764 01:15:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:51.764 01:15:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.764 01:15:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:21:51.764 01:15:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:21:52.695 [2024-07-16 01:15:08.365134] bdev_nvme.c:6988:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:52.695 [2024-07-16 01:15:08.365159] bdev_nvme.c:7068:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:52.695 [2024-07-16 01:15:08.365183] bdev_nvme.c:6951:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:52.695 [2024-07-16 01:15:08.452494] bdev_nvme.c:6917:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:21:52.695 [2024-07-16 01:15:08.637555] bdev_nvme.c:6807:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:52.695 [2024-07-16 01:15:08.637579] 
bdev_nvme.c:6766:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:52.953 01:15:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:52.953 01:15:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:52.953 01:15:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:52.953 01:15:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:52.953 01:15:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.953 01:15:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:52.953 01:15:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:52.953 01:15:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:52.953 01:15:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:52.953 01:15:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.953 01:15:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.953 01:15:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:52.953 01:15:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:21:52.953 01:15:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:21:52.953 01:15:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:52.953 01:15:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:52.953 01:15:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:21:52.953 01:15:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:52.953 01:15:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:52.953 01:15:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:52.953 01:15:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.953 01:15:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:52.953 01:15:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:52.953 01:15:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:52.953 01:15:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.953 01:15:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:21:52.953 01:15:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:52.953 01:15:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:21:52.953 01:15:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:21:52.953 01:15:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:52.953 01:15:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 
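Everything this test asserts goes through a handful of tiny shell helpers whose bodies keep echoing in the xtrace: three jq wrappers over host-side RPCs, a notification counter, and a bounded polling loop. A reconstruction from the fragments above (the @-line references match the traced scripts, but treat this as a sketch rather than verbatim source; get_subsystem_paths is the @63 helper exercised just below):

    # host/discovery.sh@59: controller names known to the host app.
    get_subsystem_names() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers \
            | jq -r '.[].name' | sort | xargs
    }

    # host/discovery.sh@55: namespaces that surfaced as bdevs (nvme0n1, ...).
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs \
            | jq -r '.[].name' | sort | xargs
    }

    # host/discovery.sh@63: TCP service ports behind one controller,
    # e.g. "4420 4421" while both listeners are up.
    get_subsystem_paths() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }

    # host/discovery.sh@74-75: notifications since the last checkpoint,
    # then advance the checkpoint by what was seen.
    get_notification_count() {
        notification_count=$(rpc_cmd -s /tmp/host.sock \
            notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }

    # common/autotest_common.sh@912-918: poll a condition, 10 tries, 1 s apart.
    waitforcondition() {
        local cond=$1
        local max=10
        while (( max-- )); do
            eval "$cond" && return 0
            sleep 1
        done
        return 1
    }

With these in hand the assertions in the trace read naturally, e.g. waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' once the second namespace has been added.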
00:21:52.953 01:15:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:21:52.953 01:15:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:21:52.953 01:15:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:52.953 01:15:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:52.953 01:15:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.953 01:15:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:52.953 01:15:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:52.953 01:15:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:52.953 01:15:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.953 01:15:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:21:52.953 01:15:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:52.953 01:15:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:21:52.953 01:15:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:21:52.953 01:15:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:52.953 01:15:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:52.953 01:15:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:52.953 01:15:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:52.953 01:15:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:52.953 01:15:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:52.953 01:15:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:52.953 01:15:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:52.953 01:15:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.953 01:15:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:52.953 01:15:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.953 01:15:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:21:52.953 01:15:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:21:52.953 01:15:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:52.954 01:15:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:52.954 01:15:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:21:52.954 01:15:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.954 01:15:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:52.954 01:15:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.954 01:15:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:52.954 01:15:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:52.954 01:15:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:52.954 01:15:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:52.954 01:15:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:52.954 01:15:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:52.954 01:15:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:52.954 01:15:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:52.954 01:15:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.954 01:15:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:52.954 01:15:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:52.954 01:15:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:52.954 01:15:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.954 01:15:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:52.954 01:15:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:52.954 01:15:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:21:52.954 01:15:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:21:52.954 01:15:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:52.954 01:15:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:52.954 01:15:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:52.954 01:15:08 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:21:52.954 01:15:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:52.954 01:15:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:52.954 01:15:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:21:52.954 01:15:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:21:52.954 01:15:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.954 01:15:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:53.210 01:15:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.211 01:15:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:21:53.211 01:15:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:53.211 01:15:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:53.211 01:15:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:53.211 01:15:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:21:53.211 01:15:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.211 01:15:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:53.211 [2024-07-16 01:15:08.977291] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:53.211 [2024-07-16 01:15:08.977594] bdev_nvme.c:6970:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:53.211 [2024-07-16 01:15:08.977631] bdev_nvme.c:6951:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:53.211 01:15:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.211 01:15:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:53.211 01:15:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:53.211 01:15:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:53.211 01:15:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:53.211 01:15:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:53.211 01:15:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:53.211 01:15:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:53.211 01:15:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:53.211 01:15:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.211 01:15:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:53.211 01:15:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:53.211 01:15:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:53.211 01:15:08 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.211 01:15:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:53.211 01:15:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:53.211 01:15:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:53.211 01:15:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:53.211 01:15:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:53.211 01:15:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:53.211 01:15:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:53.211 01:15:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:53.211 01:15:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:53.211 01:15:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:53.211 01:15:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.211 01:15:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:53.211 01:15:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:53.211 01:15:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:53.211 01:15:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.211 01:15:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:53.211 01:15:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:53.211 01:15:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:21:53.211 01:15:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:21:53.211 01:15:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:53.211 01:15:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:53.211 01:15:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:21:53.211 01:15:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:21:53.211 01:15:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:53.211 01:15:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:53.211 01:15:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.211 01:15:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:53.211 01:15:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:53.211 01:15:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:53.211 [2024-07-16 01:15:09.063892] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:21:53.211 01:15:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.211 01:15:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:21:53.211 01:15:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:21:53.211 [2024-07-16 01:15:09.168836] bdev_nvme.c:6807:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:53.211 [2024-07-16 01:15:09.168866] bdev_nvme.c:6766:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:53.211 [2024-07-16 01:15:09.168875] bdev_nvme.c:6766:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:54.141 01:15:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:54.141 01:15:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:21:54.141 01:15:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:21:54.141 01:15:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:54.141 01:15:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:54.141 01:15:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.141 01:15:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:54.141 01:15:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:54.141 01:15:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:54.141 01:15:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.399 01:15:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:21:54.399 01:15:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:54.399 01:15:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:21:54.399 01:15:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:54.399 01:15:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:54.399 01:15:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:54.399 01:15:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:54.399 01:15:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:54.399 01:15:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:54.399 01:15:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:54.399 01:15:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:54.399 01:15:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:54.399 01:15:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.399 01:15:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:54.399 01:15:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.399 01:15:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:54.399 01:15:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:54.399 01:15:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:54.399 01:15:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:54.399 01:15:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:54.399 01:15:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.399 01:15:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:54.399 [2024-07-16 01:15:10.185962] bdev_nvme.c:6970:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:54.399 [2024-07-16 01:15:10.186026] bdev_nvme.c:6951:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:54.399 [2024-07-16 01:15:10.188758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:54.399 [2024-07-16 01:15:10.188792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.399 [2024-07-16 01:15:10.188824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:54.399 [2024-07-16 01:15:10.188838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.399 [2024-07-16 01:15:10.188851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:54.399 [2024-07-16 01:15:10.188864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.399 [2024-07-16 01:15:10.188878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:54.399 [2024-07-16 01:15:10.188892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.399 [2024-07-16 01:15:10.188905] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d7e20 is same with the state(5) to be set 00:21:54.399 01:15:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.399 01:15:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:54.399 01:15:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:54.399 01:15:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:54.399 01:15:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:54.399 01:15:10 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:54.399 01:15:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:54.399 01:15:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:54.399 01:15:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.399 01:15:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:54.399 01:15:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:54.399 01:15:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:54.399 01:15:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:54.399 [2024-07-16 01:15:10.198749] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19d7e20 (9): Bad file descriptor 00:21:54.399 01:15:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.399 [2024-07-16 01:15:10.208796] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:54.399 [2024-07-16 01:15:10.209057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.399 [2024-07-16 01:15:10.209089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19d7e20 with addr=10.0.0.2, port=4420 00:21:54.399 [2024-07-16 01:15:10.209107] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d7e20 is same with the state(5) to be set 00:21:54.399 [2024-07-16 01:15:10.209131] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19d7e20 (9): Bad file descriptor 00:21:54.399 [2024-07-16 01:15:10.209166] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:54.399 [2024-07-16 01:15:10.209184] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:54.399 [2024-07-16 01:15:10.209202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:54.399 [2024-07-16 01:15:10.209223] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
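This disconnect/reset cycle, repeated below roughly every 10 ms, is provoked deliberately: host/discovery.sh@127 (traced a little above) removes the 4420 listener while the controller is still attached through it, so each reconnect is refused with errno 111 (ECONNREFUSED) and bdev_nvme walks the controller through disconnect, failed reinitialization and failed state before trying again. The ABORTED - SQ DELETION completions just before the loop are the in-flight ASYNC EVENT REQUESTs being failed back as the old admin queue pair is torn down. The provoking step, copied from the trace:

    # Drop the original data port; the host stays attached through 4421.
    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420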
00:21:54.399 [2024-07-16 01:15:10.218884] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:54.399 [2024-07-16 01:15:10.219063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.399 [2024-07-16 01:15:10.219092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19d7e20 with addr=10.0.0.2, port=4420 00:21:54.399 [2024-07-16 01:15:10.219109] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d7e20 is same with the state(5) to be set 00:21:54.399 [2024-07-16 01:15:10.219131] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19d7e20 (9): Bad file descriptor 00:21:54.399 [2024-07-16 01:15:10.219151] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:54.399 [2024-07-16 01:15:10.219166] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:54.399 [2024-07-16 01:15:10.219179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:54.399 [2024-07-16 01:15:10.219199] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:54.399 [2024-07-16 01:15:10.228951] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:54.399 [2024-07-16 01:15:10.229168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.399 [2024-07-16 01:15:10.229197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19d7e20 with addr=10.0.0.2, port=4420 00:21:54.399 [2024-07-16 01:15:10.229214] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d7e20 is same with the state(5) to be set 00:21:54.399 [2024-07-16 01:15:10.229236] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19d7e20 (9): Bad file descriptor 00:21:54.399 [2024-07-16 01:15:10.229287] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:54.399 [2024-07-16 01:15:10.229307] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:54.399 [2024-07-16 01:15:10.229321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:54.399 [2024-07-16 01:15:10.229341] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
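The refusal is observable from outside the app as well; a pure-bash probe on the test box (a hypothetical spot check, not part of the suite) sees the same ECONNREFUSED:

    # bash's /dev/tcp surfaces the same errno 111 the NVMe initiator hits.
    if ! (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null; then
        echo "10.0.0.2:4420 refused the connection"
    fi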
00:21:54.399 01:15:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.399 01:15:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:54.399 01:15:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:54.399 01:15:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:54.399 01:15:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:54.399 01:15:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:54.399 01:15:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:54.399 01:15:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:54.399 01:15:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:54.399 01:15:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.399 01:15:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:54.399 01:15:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:54.399 01:15:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:54.399 01:15:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:54.399 [2024-07-16 01:15:10.239054] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:54.399 [2024-07-16 01:15:10.239194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.399 [2024-07-16 01:15:10.239223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19d7e20 with addr=10.0.0.2, port=4420 00:21:54.399 [2024-07-16 01:15:10.239240] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d7e20 is same with the state(5) to be set 00:21:54.399 [2024-07-16 01:15:10.239266] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19d7e20 (9): Bad file descriptor 00:21:54.399 [2024-07-16 01:15:10.239286] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:54.399 [2024-07-16 01:15:10.239301] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:54.399 [2024-07-16 01:15:10.239315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:54.399 [2024-07-16 01:15:10.239347] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:54.399 [2024-07-16 01:15:10.249131] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:54.399 [2024-07-16 01:15:10.249316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.400 [2024-07-16 01:15:10.249344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19d7e20 with addr=10.0.0.2, port=4420 00:21:54.400 [2024-07-16 01:15:10.249366] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d7e20 is same with the state(5) to be set 00:21:54.400 [2024-07-16 01:15:10.249389] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19d7e20 (9): Bad file descriptor 00:21:54.400 [2024-07-16 01:15:10.249422] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:54.400 [2024-07-16 01:15:10.249440] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:54.400 [2024-07-16 01:15:10.249453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:54.400 [2024-07-16 01:15:10.249472] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:54.400 01:15:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.400 [2024-07-16 01:15:10.259203] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:54.400 [2024-07-16 01:15:10.259382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.400 [2024-07-16 01:15:10.259409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19d7e20 with addr=10.0.0.2, port=4420 00:21:54.400 [2024-07-16 01:15:10.259424] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d7e20 is same with the state(5) to be set 00:21:54.400 [2024-07-16 01:15:10.259445] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19d7e20 (9): Bad file descriptor 00:21:54.400 [2024-07-16 01:15:10.259491] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:54.400 [2024-07-16 01:15:10.259510] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:54.400 [2024-07-16 01:15:10.259524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:54.400 [2024-07-16 01:15:10.259544] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:54.400 [2024-07-16 01:15:10.269286] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:54.400 [2024-07-16 01:15:10.269511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.400 [2024-07-16 01:15:10.269539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19d7e20 with addr=10.0.0.2, port=4420 00:21:54.400 [2024-07-16 01:15:10.269555] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d7e20 is same with the state(5) to be set 00:21:54.400 [2024-07-16 01:15:10.269577] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19d7e20 (9): Bad file descriptor 00:21:54.400 [2024-07-16 01:15:10.269611] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:54.400 [2024-07-16 01:15:10.269629] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:54.400 [2024-07-16 01:15:10.269642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:54.400 [2024-07-16 01:15:10.269662] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:54.400 01:15:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:54.400 01:15:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:54.400 01:15:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:21:54.400 01:15:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:21:54.400 01:15:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:54.400 01:15:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:54.400 01:15:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:21:54.400 01:15:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:21:54.400 01:15:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:54.400 01:15:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.400 01:15:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:54.400 01:15:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:54.400 01:15:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:54.400 01:15:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:54.400 [2024-07-16 01:15:10.279355] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:54.400 [2024-07-16 01:15:10.279577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.400 [2024-07-16 01:15:10.279606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19d7e20 with addr=10.0.0.2, port=4420 00:21:54.400 [2024-07-16 01:15:10.279623] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x19d7e20 is same with the state(5) to be set 00:21:54.400 [2024-07-16 01:15:10.279645] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19d7e20 (9): Bad file descriptor 00:21:54.400 [2024-07-16 01:15:10.279666] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:54.400 [2024-07-16 01:15:10.279680] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:54.400 [2024-07-16 01:15:10.279695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:54.400 [2024-07-16 01:15:10.279726] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:54.400 01:15:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.400 [2024-07-16 01:15:10.289427] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:54.400 [2024-07-16 01:15:10.289684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.400 [2024-07-16 01:15:10.289712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19d7e20 with addr=10.0.0.2, port=4420 00:21:54.400 [2024-07-16 01:15:10.289728] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d7e20 is same with the state(5) to be set 00:21:54.400 [2024-07-16 01:15:10.289750] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19d7e20 (9): Bad file descriptor 00:21:54.400 [2024-07-16 01:15:10.289784] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:54.400 [2024-07-16 01:15:10.289803] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:54.400 [2024-07-16 01:15:10.289817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:54.400 [2024-07-16 01:15:10.289837] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:54.400 [2024-07-16 01:15:10.299495] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:54.400 [2024-07-16 01:15:10.299753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.400 [2024-07-16 01:15:10.299781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19d7e20 with addr=10.0.0.2, port=4420 00:21:54.400 [2024-07-16 01:15:10.299798] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d7e20 is same with the state(5) to be set 00:21:54.400 [2024-07-16 01:15:10.299820] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19d7e20 (9): Bad file descriptor 00:21:54.400 [2024-07-16 01:15:10.299853] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:54.400 [2024-07-16 01:15:10.299877] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:54.400 [2024-07-16 01:15:10.299892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
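The retry noise stops on its own just below: the discovery poller processes the next log page, reports the subsystem on 4420 "not found" while 4421 is "found again", and drops the dead path. The check the test polls in the meantime is host/discovery.sh@131; reconstructed as a sketch, with $NVMF_SECOND_PORT expanded to the 4421 used in this run:

    # Wait until only the surviving listener's port remains on nvme0;
    # get_subsystem_paths output flips from "4420 4421" to "4421".
    waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "4421" ]]'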
00:21:54.400 [2024-07-16 01:15:10.299912] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:54.400 [2024-07-16 01:15:10.309563] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:54.400 01:15:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:21:54.400 01:15:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:21:54.400 [2024-07-16 01:15:10.309759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.400 [2024-07-16 01:15:10.309788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19d7e20 with addr=10.0.0.2, port=4420 00:21:54.400 [2024-07-16 01:15:10.309804] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d7e20 is same with the state(5) to be set 00:21:54.400 [2024-07-16 01:15:10.309826] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19d7e20 (9): Bad file descriptor 00:21:54.400 [2024-07-16 01:15:10.309846] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:54.400 [2024-07-16 01:15:10.309860] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:54.400 [2024-07-16 01:15:10.309874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:54.400 [2024-07-16 01:15:10.309893] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:54.400 [2024-07-16 01:15:10.312932] bdev_nvme.c:6775:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:21:54.400 [2024-07-16 01:15:10.312983] bdev_nvme.c:6766:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:55.329 01:15:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:55.329 01:15:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:21:55.329 01:15:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:21:55.329 01:15:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:55.329 01:15:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:55.329 01:15:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.329 01:15:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:55.329 01:15:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:55.329 01:15:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:55.586 01:15:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.586 01:15:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:21:55.586 01:15:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:55.586 01:15:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:21:55.586 01:15:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # 
expected_count=0 00:21:55.586 01:15:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:55.586 01:15:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:55.586 01:15:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:55.586 01:15:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:55.586 01:15:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:55.586 01:15:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:55.586 01:15:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:55.586 01:15:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:21:55.586 01:15:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.586 01:15:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:55.586 01:15:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.586 01:15:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:55.586 01:15:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:55.586 01:15:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:55.586 01:15:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:55.586 01:15:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:21:55.586 01:15:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.586 01:15:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:55.586 01:15:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.586 01:15:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:21:55.586 01:15:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:21:55.586 01:15:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:55.586 01:15:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:55.587 01:15:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:21:55.587 01:15:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:55.587 01:15:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:55.587 01:15:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:55.587 01:15:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.587 01:15:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:55.587 01:15:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:55.587 01:15:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # 
xargs 00:21:55.587 01:15:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.587 01:15:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:21:55.587 01:15:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:55.587 01:15:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:21:55.587 01:15:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:21:55.587 01:15:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:55.587 01:15:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:55.587 01:15:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:21:55.587 01:15:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:55.587 01:15:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:55.587 01:15:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.587 01:15:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:55.587 01:15:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:55.587 01:15:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:55.587 01:15:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:55.587 01:15:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.587 01:15:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:21:55.587 01:15:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:55.587 01:15:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:21:55.587 01:15:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:21:55.587 01:15:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:55.587 01:15:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:55.587 01:15:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:55.587 01:15:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:55.587 01:15:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:55.587 01:15:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:55.587 01:15:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:55.587 01:15:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:55.587 01:15:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.587 01:15:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:55.587 01:15:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.587 01:15:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:21:55.587 01:15:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:21:55.587 01:15:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:55.587 01:15:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:55.587 01:15:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:55.587 01:15:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.587 01:15:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:56.959 [2024-07-16 01:15:12.606652] bdev_nvme.c:6988:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:56.959 [2024-07-16 01:15:12.606690] bdev_nvme.c:7068:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:56.960 [2024-07-16 01:15:12.606715] bdev_nvme.c:6951:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:56.960 [2024-07-16 01:15:12.693964] bdev_nvme.c:6917:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:21:56.960 [2024-07-16 01:15:12.801145] bdev_nvme.c:6807:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:56.960 [2024-07-16 01:15:12.801189] bdev_nvme.c:6766:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:56.960 01:15:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.960 01:15:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:56.960 01:15:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:21:56.960 01:15:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:56.960 01:15:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:56.960 01:15:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:56.960 01:15:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:56.960 01:15:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:56.960 01:15:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:56.960 01:15:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.960 01:15:12 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x
00:21:56.960 request:
00:21:56.960 {
00:21:56.960 "name": "nvme",
00:21:56.960 "trtype": "tcp",
00:21:56.960 "traddr": "10.0.0.2",
00:21:56.960 "adrfam": "ipv4",
00:21:56.960 "trsvcid": "8009",
00:21:56.960 "hostnqn": "nqn.2021-12.io.spdk:test",
00:21:56.960 "wait_for_attach": true,
00:21:56.960 "method": "bdev_nvme_start_discovery",
00:21:56.960 "req_id": 1
00:21:56.960 }
00:21:56.960 Got JSON-RPC error response
00:21:56.960 response:
00:21:56.960 {
00:21:56.960 "code": -17,
00:21:56.960 "message": "File exists"
00:21:56.960 }
00:21:56.960 01:15:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:21:56.960 01:15:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1
00:21:56.960 01:15:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:21:56.960 01:15:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:21:56.960 01:15:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:21:56.960 01:15:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs
00:21:56.960 01:15:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:21:56.960 01:15:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:56.960 01:15:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name'
00:21:56.960 01:15:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:56.960 01:15:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort
00:21:56.960 01:15:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs
00:21:56.960 01:15:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:56.960 01:15:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]]
00:21:56.960 01:15:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list
00:21:56.960 01:15:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:21:56.960 01:15:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:21:56.960 01:15:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:56.960 01:15:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:56.960 01:15:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:21:56.960 01:15:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:21:56.960 01:15:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:56.960 01:15:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:21:56.960 01:15:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:21:56.960 01:15:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0
00:21:56.960 01:15:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:21:56.960 01:15:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:21:56.960 01:15:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:21:56.960 01:15:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:21:56.960 01:15:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:21:56.960 01:15:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:21:56.960 01:15:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:56.960 01:15:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:56.960 request:
00:21:56.960 {
00:21:56.960 "name": "nvme_second",
00:21:56.960 "trtype": "tcp",
00:21:56.960 "traddr": "10.0.0.2",
00:21:56.960 "adrfam": "ipv4",
00:21:56.960 "trsvcid": "8009",
00:21:56.960 "hostnqn": "nqn.2021-12.io.spdk:test",
00:21:56.960 "wait_for_attach": true,
00:21:56.960 "method": "bdev_nvme_start_discovery",
00:21:56.960 "req_id": 1
00:21:56.960 }
00:21:56.960 Got JSON-RPC error response
00:21:56.960 response:
00:21:56.960 {
00:21:56.960 "code": -17,
00:21:56.960 "message": "File exists"
00:21:56.960 }
00:21:56.960 01:15:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:21:56.960 01:15:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1
00:21:56.960 01:15:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:21:56.960 01:15:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:21:56.960 01:15:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:21:56.960 01:15:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs
00:21:56.960 01:15:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:21:56.960 01:15:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name'
00:21:56.960 01:15:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:56.960 01:15:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:56.960 01:15:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort
00:21:56.960 01:15:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs
00:21:56.960 01:15:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:57.218 01:15:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]]
00:21:57.218 01:15:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list
00:21:57.218 01:15:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:21:57.218 01:15:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:21:57.218 01:15:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:57.218 01:15:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:57.218 01:15:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:21:57.218 01:15:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:21:57.218 01:15:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
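Each of these negative checks goes through the NOT wrapper, which runs a command that is expected to fail and inverts the result; the common/autotest_common.sh@648 through @675 records around it show its internals (es=1 after the rejected RPC, the (( es > 128 )) screen, the final (( !es == 0 )) test). A rough sketch consistent with those records only (the real helper also validates the argument with type -t via valid_exec_arg and can match expected error text, which the [[ -n '' ]] record hints at; both are simplified away here):

    NOT() {
        local es=0
        # run the wrapped command, capturing its exit status instead of aborting
        "$@" || es=$?
        # assumed: statuses above 128 (signal deaths) do not count as the expected failure
        if (( es > 128 )); then
            return 1
        fi
        # the caller succeeds only if the wrapped command failed
        (( es != 0 ))
    }

    # as used here:
    # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w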
00:21:57.218 01:15:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:21:57.218 01:15:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:21:57.218 01:15:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0
00:21:57.218 01:15:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:21:57.218 01:15:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:21:57.218 01:15:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:21:57.218 01:15:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:21:57.218 01:15:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:21:57.218 01:15:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:21:57.218 01:15:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:57.218 01:15:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:58.148 [2024-07-16 01:15:14.004605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:58.148 [2024-07-16 01:15:14.004685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a22ac0 with addr=10.0.0.2, port=8010
00:21:58.148 [2024-07-16 01:15:14.004732] nvme_tcp.c:2712:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:21:58.148 [2024-07-16 01:15:14.004748] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:21:58.148 [2024-07-16 01:15:14.004761] bdev_nvme.c:7050:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect
00:21:59.080 [2024-07-16 01:15:15.007102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:59.080 [2024-07-16 01:15:15.007167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a22ac0 with addr=10.0.0.2, port=8010
00:21:59.080 [2024-07-16 01:15:15.007198] nvme_tcp.c:2712:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:21:59.080 [2024-07-16 01:15:15.007214] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:21:59.080 [2024-07-16 01:15:15.007227] bdev_nvme.c:7050:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect
00:22:00.456 [2024-07-16 01:15:16.009222] bdev_nvme.c:7031:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr
00:22:00.456 request:
00:22:00.456 {
00:22:00.456 "name": "nvme_second",
00:22:00.456 "trtype": "tcp",
00:22:00.456 "traddr": "10.0.0.2",
00:22:00.456 "adrfam": "ipv4",
00:22:00.456 "trsvcid": "8010",
00:22:00.456 "hostnqn": "nqn.2021-12.io.spdk:test",
00:22:00.456 "wait_for_attach": false,
00:22:00.456 "attach_timeout_ms": 3000,
00:22:00.456 "method": "bdev_nvme_start_discovery",
00:22:00.456 "req_id": 1
00:22:00.456 }
00:22:00.456 Got JSON-RPC error response
00:22:00.456 response:
00:22:00.456 {
00:22:00.456 "code": -110,
00:22:00.456 "message": "Connection timed out"
00:22:00.456 }
00:22:00.456 01:15:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:22:00.456 01:15:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1
00:22:00.456 01:15:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:22:00.456 01:15:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:22:00.456 01:15:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:22:00.456 01:15:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs
00:22:00.456 01:15:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:22:00.456 01:15:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name'
00:22:00.456 01:15:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:00.456 01:15:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:00.456 01:15:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort
00:22:00.456 01:15:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs
00:22:00.456 01:15:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:00.456 01:15:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]]
00:22:00.456 01:15:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT
00:22:00.456 01:15:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 24323
00:22:00.456 01:15:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini
00:22:00.456 01:15:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup
00:22:00.456 01:15:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync
00:22:00.456 01:15:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:22:00.456 01:15:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e
00:22:00.456 01:15:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20}
00:22:00.456 01:15:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:22:00.456 rmmod nvme_tcp
00:22:00.456 rmmod nvme_fabrics
00:22:00.456 rmmod nvme_keyring
00:22:00.456 01:15:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:22:00.456 01:15:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e
00:22:00.456 01:15:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0
00:22:00.456 01:15:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 24288 ']'
00:22:00.456 01:15:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 24288
00:22:00.456 01:15:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 24288 ']'
00:22:00.456 01:15:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 24288
00:22:00.456 01:15:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname
00:22:00.456 01:15:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:22:00.456 01:15:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 24288
00:22:00.456 01:15:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:22:00.456 01:15:16
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:00.456 01:15:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 24288' 00:22:00.456 killing process with pid 24288 00:22:00.456 01:15:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 24288 00:22:00.456 01:15:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 24288 00:22:00.456 01:15:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:00.456 01:15:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:00.456 01:15:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:00.456 01:15:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:00.456 01:15:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:00.456 01:15:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:00.456 01:15:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:00.456 01:15:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:03.025 01:15:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:03.025 00:22:03.025 real 0m14.239s 00:22:03.025 user 0m20.871s 00:22:03.025 sys 0m2.972s 00:22:03.025 01:15:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:03.025 01:15:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:03.025 ************************************ 00:22:03.025 END TEST nvmf_host_discovery 00:22:03.025 ************************************ 00:22:03.025 01:15:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:03.025 01:15:18 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:22:03.025 01:15:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:03.025 01:15:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:03.025 01:15:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:03.025 ************************************ 00:22:03.025 START TEST nvmf_host_multipath_status 00:22:03.025 ************************************ 00:22:03.025 01:15:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:22:03.025 * Looking for test storage... 
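Before the multipath run gets going, a recap of what the discovery test above demonstrated: bdev_nvme_start_discovery refuses to start a service that collides with an existing one, whether by reusing the bdev prefix (-b nvme again) or by pointing a new name at the same discovery address (8009 again), returning JSON-RPC error -17 ("File exists") both times; and aiming at a port with no listener (8010) under a 3000 ms attach timeout (-T) fails with -110 ("Connection timed out"). A hedged sketch of reproducing the three cases by hand, using only flags and addresses that appear in this log (rpc.py stands for the full scripts/rpc.py path shown elsewhere in the trace):

    # succeeds; -w blocks until the discovered subsystem is attached
    rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w

    # same discovery address under a new name: JSON-RPC error -17, "File exists"
    rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w

    # nothing listens on 8010, so the 3000 ms attach timeout expires: error -110
    rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp \
        -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000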
00:22:03.025 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:03.025 01:15:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:03.025 01:15:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:22:03.025 01:15:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:03.025 01:15:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:03.025 01:15:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:03.025 01:15:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:03.025 01:15:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:03.025 01:15:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:03.025 01:15:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:03.025 01:15:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:03.025 01:15:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:03.025 01:15:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:03.025 01:15:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:03.025 01:15:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:03.025 01:15:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:03.025 01:15:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:03.025 01:15:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:03.025 01:15:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:03.025 01:15:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:03.025 01:15:18 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:03.025 01:15:18 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:03.025 01:15:18 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:03.025 01:15:18 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.025 01:15:18 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.025 01:15:18 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.025 01:15:18 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:22:03.025 01:15:18 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.025 01:15:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:22:03.025 01:15:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:03.025 01:15:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:03.025 01:15:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:03.025 01:15:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:03.025 01:15:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:03.025 01:15:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:03.025 01:15:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:03.025 01:15:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:03.026 01:15:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:03.026 01:15:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:03.026 01:15:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:03.026 01:15:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:22:03.026 01:15:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:03.026 01:15:18 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:22:03.026 01:15:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:22:03.026 01:15:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:03.026 01:15:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:03.026 01:15:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:03.026 01:15:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:03.026 01:15:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:03.026 01:15:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:03.026 01:15:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:03.026 01:15:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:03.026 01:15:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:03.026 01:15:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:03.026 01:15:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:22:03.026 01:15:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:04.926 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:04.926 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
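The scan above fills per-vendor PCI ID tables (e810, x722, mlx) and then walks every matching device; the 0x8086 - 0x159b pairs reported here are the two ports of an Intel E810 NIC. The step that follows resolves each PCI address to its kernel interface name through sysfs; condensed into a standalone snippet (the glob and the parameter expansion appear verbatim at nvmf/common.sh@383 and @399 in the records just below):

    pci=0000:09:00.0
    # a PCI network function lists its interface(s) under .../net/
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    # strip the sysfs path prefix, keeping just the name, e.g. cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")
    echo "Found net devices under $pci: ${pci_net_devs[*]}"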
00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:04.926 Found net devices under 0000:09:00.0: cvl_0_0 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:04.926 Found net devices under 0000:09:00.1: cvl_0_1 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:04.926 01:15:20 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:22:04.926 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:22:04.926 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms
00:22:04.926
00:22:04.926 --- 10.0.0.2 ping statistics ---
00:22:04.926 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:04.926 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms
00:22:04.926 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:22:04.926 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:22:04.926 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms
00:22:04.926
00:22:04.926 --- 10.0.0.1 ping statistics ---
00:22:04.926 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:04.926 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms
00:22:04.927 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:22:04.927 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0
00:22:04.927 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:22:04.927 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:22:04.927 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:22:04.927 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:22:04.927 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:22:04.927 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:22:04.927 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:22:04.927 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3
00:22:04.927 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:22:04.927 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable
00:22:04.927 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:22:04.927 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=28052
00:22:04.927 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:22:04.927 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 28052
00:22:04.927 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 28052 ']'
00:22:04.927 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:04.927 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100
00:22:04.927 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:04.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:04.927 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable
00:22:04.927 01:15:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:22:04.927 [2024-07-16 01:15:20.886191] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization...
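The two pings close out nvmf_tcp_init: the first E810 port (cvl_0_0, 10.0.0.2) now sits in a private network namespace where the SPDK target runs, while its sibling port (cvl_0_1, 10.0.0.1) stays in the root namespace as the initiator. Condensed from the ip/iptables records above into one block (interface names and addresses are the ones from this run; the real script also flushes stale addresses first):

    ip netns add cvl_0_0_ns_spdk                 # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk    # move one NIC port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                           # root namespace -> target namespace

Every target-side command from here on, including the nvmf_tgt launch whose banner starts just above, is wrapped in ip netns exec cvl_0_0_ns_spdk via the NVMF_TARGET_NS_CMD array.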
00:22:04.927 [2024-07-16 01:15:20.886290] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:05.185 EAL: No free 2048 kB hugepages reported on node 1 00:22:05.185 [2024-07-16 01:15:20.957488] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:05.185 [2024-07-16 01:15:21.067820] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:05.185 [2024-07-16 01:15:21.067876] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:05.185 [2024-07-16 01:15:21.067904] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:05.185 [2024-07-16 01:15:21.067915] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:05.185 [2024-07-16 01:15:21.067925] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:05.185 [2024-07-16 01:15:21.068017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:05.185 [2024-07-16 01:15:21.068024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:06.118 01:15:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:06.118 01:15:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:22:06.118 01:15:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:06.118 01:15:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:06.118 01:15:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:06.118 01:15:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:06.118 01:15:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=28052 00:22:06.118 01:15:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:06.375 [2024-07-16 01:15:22.137918] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:06.375 01:15:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:06.633 Malloc0 00:22:06.633 01:15:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:22:06.891 01:15:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:07.149 01:15:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:07.406 [2024-07-16 01:15:23.226903] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:07.406 01:15:23 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:07.664 [2024-07-16 01:15:23.467516] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:07.664 01:15:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=28405 00:22:07.664 01:15:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:22:07.664 01:15:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:07.664 01:15:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 28405 /var/tmp/bdevperf.sock 00:22:07.664 01:15:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 28405 ']' 00:22:07.664 01:15:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:07.664 01:15:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:07.664 01:15:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:07.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:07.664 01:15:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:07.664 01:15:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:07.921 01:15:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:07.921 01:15:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:22:07.921 01:15:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:08.177 01:15:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:22:08.454 Nvme0n1 00:22:08.455 01:15:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:09.019 Nvme0n1 00:22:09.019 01:15:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:22:09.019 01:15:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:22:10.926 01:15:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:22:10.926 01:15:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
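The two bdev_nvme_attach_controller calls above (@55, @56) both name the bdev Nvme0 and the same subsystem NQN; the second adds -x multipath, so rather than creating a second bdev it registers port 4421 as another I/O path under the existing Nvme0n1. A condensed sketch against the bdevperf RPC socket used in this run:

rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
# First path: creates the Nvme0n1 bdev over listener 4420.
$rpc bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
# Second path: -x multipath adds listener 4421 to the same bdev
# instead of failing on the duplicate controller name.
$rpc bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10

With both paths attached, bdevperf.py perform_tests (@76) starts the 90-second verify workload that the ANA transitions below run against.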
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:22:11.183 01:15:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:11.746 01:15:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:22:12.679 01:15:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:22:12.679 01:15:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:12.679 01:15:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:12.679 01:15:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:12.936 01:15:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:12.936 01:15:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:12.936 01:15:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:12.936 01:15:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:13.194 01:15:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:13.194 01:15:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:13.194 01:15:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:13.194 01:15:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:13.451 01:15:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:13.451 01:15:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:13.451 01:15:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:13.451 01:15:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:13.710 01:15:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:13.710 01:15:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:13.710 01:15:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:13.710 01:15:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:13.968 01:15:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:13.968 01:15:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:13.968 01:15:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:13.968 01:15:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:14.226 01:15:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:14.226 01:15:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:22:14.226 01:15:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:14.484 01:15:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:14.742 01:15:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:22:15.676 01:15:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:22:15.676 01:15:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:15.676 01:15:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:15.676 01:15:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:15.934 01:15:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:15.934 01:15:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:15.934 01:15:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:15.934 01:15:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:16.192 01:15:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:16.192 01:15:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:16.192 01:15:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:16.192 01:15:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:16.450 01:15:32 nvmf_tcp.nvmf_host_multipath_status -- 
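Every port_status probe above is the same two-step: dump all I/O paths from bdevperf with bdev_nvme_get_io_paths, then jq-select the path whose trsvcid matches and read one boolean off it (current, connected, or accessible); check_status is six of these probes in a row. A standalone sketch of the helper, assuming the bdevperf socket from this run:

# Sketch of port_status: compare one attribute of one path against an
# expected value; attr is current, connected, or accessible.
port_status() {
    local port=$1 attr=$2 expected=$3 got
    got=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
          jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
    [[ $got == "$expected" ]]
}
# e.g. 4421 reachable but not carrying I/O:
port_status 4421 accessible true && port_status 4421 current false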
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:16.450 01:15:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:16.450 01:15:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:16.450 01:15:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:16.708 01:15:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:16.708 01:15:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:16.708 01:15:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:16.708 01:15:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:16.965 01:15:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:16.965 01:15:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:16.965 01:15:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:16.965 01:15:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:17.223 01:15:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:17.223 01:15:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:22:17.223 01:15:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:17.481 01:15:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:22:17.738 01:15:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:22:18.669 01:15:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:22:18.669 01:15:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:18.669 01:15:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:18.669 01:15:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:18.926 01:15:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:18.926 01:15:34 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:18.926 01:15:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:18.926 01:15:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:19.184 01:15:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:19.184 01:15:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:19.184 01:15:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:19.184 01:15:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:19.442 01:15:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:19.442 01:15:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:19.442 01:15:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:19.442 01:15:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:19.701 01:15:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:19.701 01:15:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:19.701 01:15:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:19.701 01:15:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:19.959 01:15:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:19.959 01:15:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:19.959 01:15:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:19.959 01:15:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:20.217 01:15:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:20.217 01:15:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:22:20.217 01:15:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:20.475 01:15:36 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:20.764 01:15:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:22:21.698 01:15:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:22:21.698 01:15:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:21.698 01:15:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:21.698 01:15:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:21.957 01:15:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:21.957 01:15:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:21.957 01:15:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:21.957 01:15:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:22.215 01:15:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:22.215 01:15:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:22.215 01:15:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:22.215 01:15:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:22.473 01:15:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:22.473 01:15:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:22.473 01:15:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:22.473 01:15:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:22.730 01:15:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:22.730 01:15:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:22.730 01:15:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:22.730 01:15:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:22.989 01:15:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
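The state transitions themselves (the @59/@60 pairs above) are one RPC per listener plus a one-second settle, which gives the initiator time to observe the ANA change before the next check_status round. A sketch of that helper, using the NQN and addresses from this run:

set_ana_state() {   # sketch of set_ANA_state: <state for 4420> <state for 4421>
    local rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    sleep 1   # matches the sleep the test takes before re-checking paths
}
set_ana_state non_optimized inaccessible   # the transition at @104 above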
-- # [[ true == \t\r\u\e ]] 00:22:22.989 01:15:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:22.989 01:15:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:22.989 01:15:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:23.247 01:15:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:23.247 01:15:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:22:23.247 01:15:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:22:23.505 01:15:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:23.763 01:15:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:22:24.696 01:15:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:22:24.696 01:15:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:24.696 01:15:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:24.696 01:15:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:24.954 01:15:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:24.954 01:15:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:24.954 01:15:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:24.954 01:15:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:25.212 01:15:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:25.212 01:15:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:25.212 01:15:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:25.212 01:15:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:25.470 01:15:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:25.470 01:15:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # 
port_status 4421 connected true 00:22:25.470 01:15:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:25.470 01:15:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:25.727 01:15:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:25.727 01:15:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:25.727 01:15:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:25.727 01:15:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:25.985 01:15:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:25.985 01:15:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:25.985 01:15:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:25.985 01:15:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:26.242 01:15:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:26.242 01:15:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:22:26.242 01:15:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:22:26.499 01:15:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:26.756 01:15:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:22:27.687 01:15:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:22:27.687 01:15:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:27.687 01:15:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:27.687 01:15:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:27.944 01:15:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:27.944 01:15:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:27.944 01:15:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:27.944 01:15:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:28.202 01:15:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:28.202 01:15:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:28.202 01:15:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:28.202 01:15:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:28.459 01:15:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:28.459 01:15:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:28.459 01:15:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:28.459 01:15:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:28.715 01:15:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:28.715 01:15:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:28.715 01:15:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:28.715 01:15:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:28.972 01:15:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:28.972 01:15:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:28.972 01:15:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:28.972 01:15:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:29.229 01:15:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:29.229 01:15:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:22:29.487 01:15:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:22:29.487 01:15:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:22:29.744 01:15:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:30.002 01:15:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:22:30.932 01:15:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:22:30.932 01:15:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:30.932 01:15:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:30.932 01:15:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:31.190 01:15:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:31.190 01:15:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:31.190 01:15:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:31.190 01:15:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:31.447 01:15:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:31.447 01:15:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:31.447 01:15:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:31.447 01:15:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:31.703 01:15:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:31.703 01:15:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:31.703 01:15:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:31.703 01:15:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:31.960 01:15:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:31.960 01:15:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:31.960 01:15:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:31.960 01:15:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:32.217 01:15:48 
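At @116 the test flips Nvme0n1 from the default active_passive policy to active_active and re-runs the ANA matrix. The effect shows up immediately in the optimized/optimized check that follows: with active_active, "current" is no longer exclusive to a single path, so 4420 and 4421 report current==true at the same time (check_status true true true true true true above). The policy switch is a single RPC:

# With active_active, I/O is spread across every optimized path rather
# than pinned to one "current" path.
scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active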
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:32.217 01:15:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:32.217 01:15:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:32.217 01:15:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:32.474 01:15:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:32.474 01:15:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:22:32.474 01:15:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:32.730 01:15:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:32.987 01:15:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:22:34.359 01:15:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:22:34.359 01:15:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:34.359 01:15:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:34.359 01:15:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:34.359 01:15:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:34.359 01:15:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:34.359 01:15:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:34.359 01:15:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:34.616 01:15:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:34.616 01:15:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:34.616 01:15:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:34.616 01:15:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:34.873 01:15:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:34.873 01:15:50 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:34.873 01:15:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:34.873 01:15:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:35.131 01:15:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:35.131 01:15:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:35.131 01:15:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:35.131 01:15:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:35.389 01:15:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:35.389 01:15:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:35.389 01:15:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:35.389 01:15:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:35.646 01:15:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:35.646 01:15:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:22:35.646 01:15:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:35.904 01:15:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:22:36.161 01:15:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:22:37.093 01:15:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:22:37.093 01:15:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:37.093 01:15:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:37.093 01:15:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:37.351 01:15:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:37.351 01:15:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:37.351 01:15:53 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:37.351 01:15:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:37.608 01:15:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:37.608 01:15:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:37.608 01:15:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:37.608 01:15:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:37.896 01:15:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:37.896 01:15:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:37.896 01:15:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:37.896 01:15:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:38.153 01:15:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:38.153 01:15:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:38.153 01:15:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:38.153 01:15:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:38.411 01:15:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:38.411 01:15:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:38.411 01:15:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:38.411 01:15:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:38.668 01:15:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:38.668 01:15:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:22:38.668 01:15:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:38.926 01:15:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:39.184 01:15:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:22:40.117 01:15:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:22:40.117 01:15:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:40.117 01:15:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:40.117 01:15:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:40.375 01:15:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:40.375 01:15:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:40.375 01:15:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:40.375 01:15:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:40.632 01:15:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:40.633 01:15:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:40.633 01:15:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:40.633 01:15:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:40.891 01:15:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:40.891 01:15:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:40.891 01:15:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:40.891 01:15:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:41.149 01:15:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:41.149 01:15:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:41.149 01:15:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:41.149 01:15:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:41.407 01:15:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:41.407 01:15:57 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:41.407 01:15:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:41.407 01:15:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:41.665 01:15:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:41.665 01:15:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 28405 00:22:41.665 01:15:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 28405 ']' 00:22:41.665 01:15:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 28405 00:22:41.665 01:15:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:22:41.665 01:15:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:41.665 01:15:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 28405 00:22:41.665 01:15:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:41.665 01:15:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:41.665 01:15:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 28405' 00:22:41.665 killing process with pid 28405 00:22:41.665 01:15:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 28405 00:22:41.665 01:15:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 28405 00:22:41.665 Connection closed with partial response: 00:22:41.665 00:22:41.665 00:22:41.926 01:15:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 28405 00:22:41.926 01:15:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:41.926 [2024-07-16 01:15:23.525065] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:22:41.926 [2024-07-16 01:15:23.525149] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid28405 ] 00:22:41.926 EAL: No free 2048 kB hugepages reported on node 1 00:22:41.926 [2024-07-16 01:15:23.584059] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:41.926 [2024-07-16 01:15:23.691274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:41.926 Running I/O for 90 seconds... 
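Teardown above is autotest's killprocess: confirm the pid is alive (kill -0), resolve its comm name with ps and refuse to signal anything named sudo, then kill and wait so the exit status is reaped. The "Connection closed with partial response" lines are expected, since bdevperf was started with -t 90 and is stopped as soon as every ANA transition has been verified; @141 then replays the captured bdevperf log (try.txt). A condensed sketch of the guard:

killprocess() {   # sketch of the guarded kill used at teardown
    local pid=$1
    kill -0 "$pid" || return 1                                     # still running?
    [[ $(ps --no-headers -o comm= "$pid") != sudo ]] || return 1   # never signal sudo
    kill "$pid"
    wait "$pid" || true   # reap; a non-zero status from the kill is expected
}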
00:22:41.926 [2024-07-16 01:15:39.286343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:70304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:41.926 [2024-07-16 01:15:39.286412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
[... 127 further nvme_io_qpair_print_command/spdk_nvme_print_completion pairs from the 01:15:39 burst (WRITE lba:70312-70496 and READ lba:69480-70296 on qid:1, sqhd:004a-007f wrapping to 0000-0048), every completion ASYMMETRIC ACCESS INACCESSIBLE (03/02), omitted ...]
00:22:41.929 [2024-07-16 01:15:54.953591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:117200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:41.929 [2024-07-16 01:15:54.953655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
[... 33 further command/completion pairs from the 01:15:54 burst (READ lba:117232-117504 and WRITE lba:117536-117888 on qid:1, sqhd:0006-0026), every completion ASYMMETRIC ACCESS INACCESSIBLE (03/02), omitted ...]
00:22:41.930 Received shutdown signal, test time was about 32.513480 seconds
00:22:41.930
00:22:41.930 Latency(us)
00:22:41.930 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:41.930 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:22:41.930 Verification LBA range: start 0x0 length 0x4000
00:22:41.930 Nvme0n1 : 32.51 8192.31 32.00 0.00 0.00 15597.50 524.89 4026531.84
00:22:41.930 ===================================================================================================================
00:22:41.930 Total : 8192.31 32.00 0.00 0.00 15597.50 524.89 4026531.84
00:22:41.930 01:15:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:22:42.188 01:15:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:22:42.188 01:15:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:22:42.188 01:15:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:22:42.188 01:15:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:22:42.188 01:15:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:22:42.188 01:15:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:22:42.188 01:15:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
00:22:42.188 01:15:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
00:22:42.188 01:15:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:22:42.188 rmmod nvme_tcp
00:22:42.188 rmmod nvme_fabrics
00:22:42.188 rmmod nvme_keyring
00:22:42.188 01:15:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:22:42.188 01:15:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
00:22:42.188 01:15:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
00:22:42.188 01:15:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 28052 ']'
00:22:42.188 01:15:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 28052
00:22:42.188 01:15:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 28052 ']'
00:22:42.188 01:15:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 28052
00:22:42.188 01:15:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname
00:22:42.188 01:15:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:22:42.188 01:15:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 28052
00:22:42.188 01:15:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:22:42.188 01:15:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:22:42.188 01:15:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 28052'
killing process with pid 28052
01:15:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 28052
01:15:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 28052
00:22:42.448 01:15:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:22:42.448 01:15:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:22:42.448 01:15:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:22:42.448 01:15:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:22:42.448 01:15:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns
00:22:42.448 01:15:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:42.448 01:15:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:22:42.448 01:15:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:44.986 01:16:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:22:44.986
00:22:44.986 real 0m41.964s
00:22:44.986 user 2m3.242s
00:22:44.986 sys 0m11.603s
00:22:44.986 01:16:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable
00:22:44.986 01:16:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:22:44.986 ************************************
00:22:44.986 END TEST nvmf_host_multipath_status
00:22:44.986 ************************************
00:22:44.986 01:16:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:22:44.986 01:16:00 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:22:44.986 01:16:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:22:44.986 01:16:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:22:44.986 01:16:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:22:44.986 ************************************
00:22:44.986 START TEST nvmf_discovery_remove_ifc
00:22:44.986 ************************************
00:22:44.986 01:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:22:44.986 * Looking for test storage...
00:22:44.986 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:22:44.986 01:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:22:44.986 01:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s
00:22:44.986 01:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:22:44.986 01:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:22:44.986 01:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:22:44.986 01:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:22:44.986 01:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:22:44.986 01:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:22:44.986 01:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:22:44.986 01:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:22:44.986 01:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:22:44.987 01:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:22:44.987 01:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:22:44.987 01:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a
00:22:44.987 01:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:22:44.987 01:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:22:44.987 01:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:22:44.987 01:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVMF_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:22:44.987 01:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:22:44.987 01:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:22:44.987 01:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:22:44.987 01:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.987 01:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.987 01:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.987 01:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:22:44.987 01:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.987 01:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:22:44.987 01:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:44.987 01:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:44.987 01:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:44.987 01:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:44.987 01:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:44.987 01:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:44.987 01:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:44.987 01:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:44.987 01:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:22:44.987 01:16:00 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:22:44.987 01:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:22:44.987 01:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:22:44.987 01:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:22:44.987 01:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:22:44.987 01:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:22:44.987 01:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:44.987 01:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:44.987 01:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:44.987 01:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:44.987 01:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:44.987 01:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:44.987 01:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:44.987 01:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:44.987 01:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:44.987 01:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:44.987 01:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:22:44.987 01:16:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:46.889 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:46.889 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:22:46.889 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:46.889 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:46.889 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:46.889 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:46.889 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:46.889 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:22:46.889 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:46.889 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:22:46.889 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:22:46.889 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:22:46.889 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:22:46.889 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:22:46.889 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 
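The array declarations above (e810, x722, mlx) drive the PCI scan that follows: nvmf/common.sh matches every NIC by vendor:device ID, keeps the supported family for this run (E810, 0x8086:0x159b), and then collects the netdev names under each PCI function. A hypothetical standalone sketch of that classification, assuming pciutils' lspci is available (the helper itself walks a prebuilt pci_bus_cache instead):

    # Hypothetical standalone version of the E810 classification done by
    # gather_supported_nvmf_pci_devs above; assumes lspci from pciutils.
    intel=0x8086
    e810=()
    while read -r addr vendor device; do
        # 0x159b is the E810 device ID matched in this run
        [[ $vendor == "$intel" && $device == 0x159b ]] && e810+=("$addr")
    done < <(lspci -Dnmm | awk '{print $1, "0x"$3, "0x"$4}' | tr -d '"')
    for pci in "${e810[@]}"; do
        # netdev names live under the PCI function, e.g. cvl_0_0
        echo "Found net devices under $pci:" /sys/bus/pci/devices/"$pci"/net/*
    done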
00:22:46.889 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:46.889 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:46.889 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:46.889 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:46.889 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:46.889 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:46.890 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:46.890 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:46.890 Found net devices under 0000:09:00.0: cvl_0_0 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:46.890 Found net devices under 0000:09:00.1: cvl_0_1 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 
2 > 1 )) 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:46.890 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:46.890 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:22:46.890 00:22:46.890 --- 10.0.0.2 ping statistics --- 00:22:46.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:46.890 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:46.890 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:46.890 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:22:46.890 00:22:46.890 --- 10.0.0.1 ping statistics --- 00:22:46.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:46.890 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=34600 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 34600 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 34600 ']' 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:46.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:46.890 01:16:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:47.148 [2024-07-16 01:16:02.898745] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 
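The nvmf_tcp_init block above builds the test topology from the two E810 ports: cvl_0_0 becomes the target interface inside namespace cvl_0_0_ns_spdk (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and both directions are verified with a single ping before the target app is started inside the namespace. Condensed, the same setup is (commands as printed in the xtrace; run as root):

    # Namespace topology used by this run: target in cvl_0_0_ns_spdk,
    # initiator in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                       # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1         # target -> initiator
    # nvmf_tgt then runs inside the namespace (pid 34600 in this run):
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &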
00:22:47.148 [2024-07-16 01:16:02.898847] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:47.148 EAL: No free 2048 kB hugepages reported on node 1 00:22:47.148 [2024-07-16 01:16:02.964544] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:47.148 [2024-07-16 01:16:03.066907] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:47.148 [2024-07-16 01:16:03.066989] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:47.148 [2024-07-16 01:16:03.067010] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:47.148 [2024-07-16 01:16:03.067021] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:47.148 [2024-07-16 01:16:03.067031] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:47.148 [2024-07-16 01:16:03.067061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:47.405 01:16:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:47.405 01:16:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:22:47.405 01:16:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:47.405 01:16:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:47.405 01:16:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:47.405 01:16:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:47.405 01:16:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:22:47.405 01:16:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.405 01:16:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:47.405 [2024-07-16 01:16:03.204461] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:47.406 [2024-07-16 01:16:03.212621] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:47.406 null0 00:22:47.406 [2024-07-16 01:16:03.244592] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:47.406 01:16:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.406 01:16:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=34628 00:22:47.406 01:16:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:22:47.406 01:16:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 34628 /tmp/host.sock 00:22:47.406 01:16:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 34628 ']' 00:22:47.406 01:16:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:22:47.406 01:16:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:22:47.406 01:16:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:47.406 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:47.406 01:16:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:47.406 01:16:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:47.406 [2024-07-16 01:16:03.307671] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:22:47.406 [2024-07-16 01:16:03.307738] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid34628 ] 00:22:47.406 EAL: No free 2048 kB hugepages reported on node 1 00:22:47.406 [2024-07-16 01:16:03.364643] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:47.663 [2024-07-16 01:16:03.470229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:47.663 01:16:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:47.663 01:16:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:22:47.663 01:16:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:47.663 01:16:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:22:47.663 01:16:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.663 01:16:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:47.663 01:16:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.663 01:16:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:22:47.663 01:16:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.663 01:16:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:47.663 01:16:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.663 01:16:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:22:47.663 01:16:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.663 01:16:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:49.032 [2024-07-16 01:16:04.621578] bdev_nvme.c:6988:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:49.032 [2024-07-16 01:16:04.621601] bdev_nvme.c:7068:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:49.032 [2024-07-16 01:16:04.621626] bdev_nvme.c:6951:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:49.032 [2024-07-16 01:16:04.707926] 
bdev_nvme.c:6917:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:49.032 [2024-07-16 01:16:04.805694] bdev_nvme.c:7778:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:49.032 [2024-07-16 01:16:04.805758] bdev_nvme.c:7778:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:49.032 [2024-07-16 01:16:04.805797] bdev_nvme.c:7778:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:49.032 [2024-07-16 01:16:04.805832] bdev_nvme.c:6807:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:49.032 [2024-07-16 01:16:04.805860] bdev_nvme.c:6766:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:49.032 01:16:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.032 01:16:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:22:49.032 01:16:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:49.032 01:16:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:49.032 01:16:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:49.032 01:16:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.032 01:16:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:49.032 01:16:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:49.032 [2024-07-16 01:16:04.810260] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x6f99d0 was disconnected and freed. delete nvme_qpair. 
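The repeated get_bdev_list calls from here on are the test's wait loop: after bdev_nvme_start_discovery attaches nvme0 (and nvme0n1 appears), the script polls the host app's RPC socket once a second and compares the sorted bdev list against what it expects. A minimal sketch of the two helpers, assuming rpc_cmd resolves to SPDK's scripts/rpc.py and using the /tmp/host.sock socket from this run:

    # Minimal sketch of get_bdev_list/wait_for_bdev as exercised above
    # (the real helpers live in test/nvmf/host/discovery_remove_ifc.sh).
    get_bdev_list() {
        scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs |
            jq -r '.[].name' | sort | xargs
    }
    wait_for_bdev() {
        # poll once a second until the bdev list equals the expected string
        local expected=$1
        while [[ $(get_bdev_list) != "$expected" ]]; do
            sleep 1
        done
    }
    wait_for_bdev nvme0n1   # blocks until discovery has attached the namespace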
00:22:49.032 01:16:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:49.032 01:16:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.032 01:16:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:22:49.032 01:16:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:22:49.032 01:16:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:22:49.032 01:16:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:22:49.032 01:16:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:49.032 01:16:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:49.032 01:16:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:49.032 01:16:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.032 01:16:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:49.032 01:16:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:49.032 01:16:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:49.032 01:16:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.032 01:16:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:49.032 01:16:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:49.964 01:16:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:49.964 01:16:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:49.964 01:16:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:49.964 01:16:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.964 01:16:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:49.964 01:16:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:49.964 01:16:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:50.222 01:16:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.222 01:16:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:50.222 01:16:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:51.154 01:16:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:51.154 01:16:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:51.154 01:16:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:51.154 01:16:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.154 01:16:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # 
set +x 00:22:51.154 01:16:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:51.154 01:16:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:51.154 01:16:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.154 01:16:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:51.155 01:16:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:52.088 01:16:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:52.088 01:16:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:52.088 01:16:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:52.088 01:16:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.088 01:16:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:52.088 01:16:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:52.088 01:16:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:52.088 01:16:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.345 01:16:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:52.345 01:16:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:53.279 01:16:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:53.279 01:16:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:53.279 01:16:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:53.279 01:16:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.279 01:16:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:53.279 01:16:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:53.279 01:16:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:53.279 01:16:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.279 01:16:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:53.279 01:16:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:54.213 01:16:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:54.213 01:16:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:54.213 01:16:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:54.213 01:16:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.213 01:16:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:54.213 01:16:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:54.213 01:16:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:54.213 
01:16:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.213 01:16:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:54.213 01:16:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:54.471 [2024-07-16 01:16:10.247214] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:22:54.471 [2024-07-16 01:16:10.247296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.471 [2024-07-16 01:16:10.247331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.471 [2024-07-16 01:16:10.247350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.471 [2024-07-16 01:16:10.247363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.471 [2024-07-16 01:16:10.247378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.471 [2024-07-16 01:16:10.247390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.471 [2024-07-16 01:16:10.247404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.471 [2024-07-16 01:16:10.247416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.471 [2024-07-16 01:16:10.247429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.471 [2024-07-16 01:16:10.247442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.471 [2024-07-16 01:16:10.247454] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c04e0 is same with the state(5) to be set 00:22:54.472 [2024-07-16 01:16:10.257247] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c04e0 (9): Bad file descriptor 00:22:54.472 [2024-07-16 01:16:10.267294] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:55.404 01:16:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:55.404 01:16:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:55.404 01:16:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:55.404 01:16:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.404 01:16:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:55.404 01:16:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:55.404 01:16:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:55.404 [2024-07-16 01:16:11.290021] 
posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:22:55.404 [2024-07-16 01:16:11.290093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c04e0 with addr=10.0.0.2, port=4420 00:22:55.404 [2024-07-16 01:16:11.290120] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c04e0 is same with the state(5) to be set 00:22:55.404 [2024-07-16 01:16:11.290175] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c04e0 (9): Bad file descriptor 00:22:55.404 [2024-07-16 01:16:11.290641] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:55.404 [2024-07-16 01:16:11.290671] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:55.404 [2024-07-16 01:16:11.290686] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:55.404 [2024-07-16 01:16:11.290702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:55.404 [2024-07-16 01:16:11.290730] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:55.404 [2024-07-16 01:16:11.290748] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:55.404 01:16:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.404 01:16:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:55.404 01:16:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:56.337 [2024-07-16 01:16:12.293252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:56.337 [2024-07-16 01:16:12.293279] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:56.337 [2024-07-16 01:16:12.293292] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:56.337 [2024-07-16 01:16:12.293304] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:22:56.337 [2024-07-16 01:16:12.293323] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
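The errno-110 storm above is the fault the test injects: the target-side address was deleted and cvl_0_0 taken down (the two ip commands in the xtrace a few seconds earlier), so every reconnect from the host times out until the --ctrlr-loss-timeout-sec 2 budget given to bdev_nvme_start_discovery expires and the controller is torn down. In sketch form:

    # Fault injection from this run: remove the target-side address, then
    # wait (via the wait_for_bdev sketch above) for nvme0n1 to disappear
    # once the host gives up after ctrlr-loss-timeout-sec (2s here).
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
    wait_for_bdev ''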
00:22:56.337 [2024-07-16 01:16:12.293358] bdev_nvme.c:6739:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:22:56.337 [2024-07-16 01:16:12.293408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.337 [2024-07-16 01:16:12.293429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.337 [2024-07-16 01:16:12.293449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.337 [2024-07-16 01:16:12.293463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.337 [2024-07-16 01:16:12.293476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.337 [2024-07-16 01:16:12.293499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.337 [2024-07-16 01:16:12.293514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.337 [2024-07-16 01:16:12.293531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.338 [2024-07-16 01:16:12.293545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.338 [2024-07-16 01:16:12.293558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.338 [2024-07-16 01:16:12.293572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:22:56.338 [2024-07-16 01:16:12.293717] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6bf960 (9): Bad file descriptor 00:22:56.338 [2024-07-16 01:16:12.294727] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:22:56.338 [2024-07-16 01:16:12.294750] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:22:56.338 01:16:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:56.338 01:16:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:56.338 01:16:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:56.338 01:16:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:56.338 01:16:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:56.338 01:16:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:56.338 01:16:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:56.338 01:16:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:56.595 01:16:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:22:56.595 01:16:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:56.595 01:16:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:56.595 01:16:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:22:56.595 01:16:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:56.595 01:16:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:56.595 01:16:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:56.595 01:16:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:56.595 01:16:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:56.595 01:16:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:56.595 01:16:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:56.595 01:16:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:56.595 01:16:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:56.595 01:16:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:57.553 01:16:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:57.553 01:16:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:57.553 01:16:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:57.553 01:16:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.553 01:16:13 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # sort 00:22:57.553 01:16:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:57.553 01:16:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:57.553 01:16:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.553 01:16:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:57.553 01:16:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:58.486 [2024-07-16 01:16:14.349172] bdev_nvme.c:6988:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:58.486 [2024-07-16 01:16:14.349202] bdev_nvme.c:7068:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:58.486 [2024-07-16 01:16:14.349229] bdev_nvme.c:6951:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:58.486 [2024-07-16 01:16:14.436519] bdev_nvme.c:6917:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:22:58.486 01:16:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:58.486 01:16:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:58.486 01:16:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:58.486 01:16:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.486 01:16:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:58.486 01:16:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:58.486 01:16:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:58.744 01:16:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.744 01:16:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:58.744 01:16:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:58.744 [2024-07-16 01:16:14.660997] bdev_nvme.c:7778:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:58.744 [2024-07-16 01:16:14.661072] bdev_nvme.c:7778:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:58.744 [2024-07-16 01:16:14.661108] bdev_nvme.c:7778:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:58.744 [2024-07-16 01:16:14.661132] bdev_nvme.c:6807:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:22:58.744 [2024-07-16 01:16:14.661149] bdev_nvme.c:6766:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:58.744 [2024-07-16 01:16:14.666400] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x7032a0 was disconnected and freed. delete nvme_qpair. 
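With the address restored and cvl_0_0 back up, the discovery poller reattaches the same subsystem as a fresh controller (nvme1, qpair 0x7032a0 above), and the test passes once nvme1n1 is back in the bdev list before both apps are shut down. The recovery half, condensed from the xtrace:

    # Recovery and teardown as performed by this run; hostpid=34628 is the
    # host app on /tmp/host.sock, nvmfpid=34600 the in-namespace nvmf_tgt.
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    wait_for_bdev nvme1n1
    kill "$hostpid" && wait "$hostpid"   # killprocess 34628
    kill "$nvmfpid" && wait "$nvmfpid"   # killprocess 34600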
00:22:59.678 01:16:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:59.678 01:16:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:59.678 01:16:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:59.678 01:16:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.678 01:16:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:59.678 01:16:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:59.678 01:16:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:59.678 01:16:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.678 01:16:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:22:59.678 01:16:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:22:59.678 01:16:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 34628 00:22:59.679 01:16:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 34628 ']' 00:22:59.679 01:16:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 34628 00:22:59.679 01:16:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:22:59.679 01:16:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:59.679 01:16:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 34628 00:22:59.679 01:16:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:59.679 01:16:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:59.679 01:16:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 34628' 00:22:59.679 killing process with pid 34628 00:22:59.679 01:16:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 34628 00:22:59.679 01:16:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 34628 00:22:59.936 01:16:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:22:59.936 01:16:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:59.936 01:16:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:22:59.936 01:16:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:59.936 01:16:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:22:59.936 01:16:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:59.936 01:16:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:59.936 rmmod nvme_tcp 00:22:59.936 rmmod nvme_fabrics 00:22:59.936 rmmod nvme_keyring 00:22:59.936 01:16:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:59.936 01:16:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:22:59.936 01:16:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:22:59.936 01:16:15 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 34600 ']' 00:22:59.936 01:16:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 34600 00:22:59.936 01:16:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 34600 ']' 00:22:59.936 01:16:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 34600 00:22:59.936 01:16:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:22:59.936 01:16:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:59.936 01:16:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 34600 00:23:00.194 01:16:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:00.194 01:16:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:00.194 01:16:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 34600' 00:23:00.194 killing process with pid 34600 00:23:00.194 01:16:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 34600 00:23:00.194 01:16:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 34600 00:23:00.452 01:16:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:00.452 01:16:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:00.452 01:16:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:00.452 01:16:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:00.452 01:16:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:00.452 01:16:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:00.452 01:16:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:00.452 01:16:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:02.379 01:16:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:02.379 00:23:02.379 real 0m17.729s 00:23:02.379 user 0m25.592s 00:23:02.379 sys 0m3.025s 00:23:02.379 01:16:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:02.379 01:16:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:02.379 ************************************ 00:23:02.379 END TEST nvmf_discovery_remove_ifc 00:23:02.379 ************************************ 00:23:02.379 01:16:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:02.379 01:16:18 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:02.379 01:16:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:02.379 01:16:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:02.379 01:16:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:02.379 ************************************ 00:23:02.379 START TEST nvmf_identify_kernel_target 00:23:02.379 ************************************ 00:23:02.379 01:16:18 
nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:02.379 * Looking for test storage... 00:23:02.379 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:02.379 01:16:18 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:02.379 01:16:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:23:02.379 01:16:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:02.379 01:16:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:02.379 01:16:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:02.379 01:16:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:02.379 01:16:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:02.379 01:16:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:02.379 01:16:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:02.379 01:16:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:02.379 01:16:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:02.379 01:16:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:02.379 01:16:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:02.379 01:16:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:02.379 01:16:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:02.379 01:16:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:02.379 01:16:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:02.379 01:16:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:02.379 01:16:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:02.379 01:16:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:02.380 01:16:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:02.380 01:16:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:02.380 01:16:18 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.380 01:16:18 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.380 01:16:18 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.380 01:16:18 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:23:02.380 01:16:18 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.380 01:16:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:23:02.380 01:16:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:02.380 01:16:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:02.380 01:16:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:02.380 01:16:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:02.380 01:16:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:02.380 01:16:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:02.380 01:16:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:02.380 01:16:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:02.380 01:16:18 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:23:02.380 01:16:18 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:02.380 01:16:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:02.380 01:16:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:02.380 01:16:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:02.380 01:16:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:02.380 01:16:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:02.380 01:16:18 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:02.380 01:16:18 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:02.380 01:16:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:02.380 01:16:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:02.380 01:16:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:23:02.380 01:16:18 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:04.912 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:04.912 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:04.912 Found net devices under 0000:09:00.0: cvl_0_0 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:04.912 Found net devices under 0000:09:00.1: cvl_0_1 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:04.912 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:04.913 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:04.913 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:04.913 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:04.913 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:04.913 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:04.913 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:04.913 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:04.913 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:04.913 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.149 ms 00:23:04.913 00:23:04.913 --- 10.0.0.2 ping statistics --- 00:23:04.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:04.913 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:23:04.913 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:04.913 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:04.913 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:23:04.913 00:23:04.913 --- 10.0.0.1 ping statistics --- 00:23:04.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:04.913 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:23:04.913 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:04.913 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:23:04.913 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:04.913 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:04.913 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:04.913 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:04.913 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:04.913 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:04.913 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:04.913 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:23:04.913 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:23:04.913 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:23:04.913 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:04.913 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:04.913 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:04.913 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:04.913 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:04.913 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:04.913 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:04.913 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:04.913 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:04.913 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:23:04.913 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:23:04.913 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:23:04.913 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:23:04.913 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:04.913 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:04.913 01:16:20 
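The nvmf_tcp_init steps traced above split the two ice ports across a network namespace, so the target side (10.0.0.2 on cvl_0_0 inside cvl_0_0_ns_spdk) and the initiator side (10.0.0.1 on cvl_0_1 on the host) exchange traffic over real wire. Condensed into one runnable sequence (interface names and the 10.0.0.0/24 addressing are specific to this rig):

  NS=cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                 # target port moves into the netns
  ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator address stays on the host
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                              # host -> namespace
  ip netns exec "$NS" ping -c 1 10.0.0.1          # namespace -> host

An SPDK target would then be launched under "ip netns exec $NS" (the NVMF_APP prefix above) so it binds only inside the namespace; this particular test instead points the kernel nvmet target at the host-side address 10.0.0.1.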
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:04.913 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:23:04.913 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:23:04.913 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:23:04.913 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:04.913 01:16:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:05.846 Waiting for block devices as requested 00:23:05.846 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:05.846 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:06.103 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:06.103 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:06.103 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:06.362 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:06.362 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:06.362 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:06.362 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:23:06.644 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:06.644 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:06.644 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:06.903 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:06.903 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:06.903 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:06.903 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:07.162 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:07.162 01:16:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:07.162 01:16:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:07.162 01:16:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:23:07.162 01:16:23 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:23:07.162 01:16:23 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:07.162 01:16:23 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:07.162 01:16:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:23:07.162 01:16:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:07.162 01:16:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:07.162 No valid GPT data, bailing 00:23:07.162 01:16:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:07.162 01:16:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:23:07.162 01:16:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:23:07.162 01:16:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:23:07.424 01:16:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:23:07.424 01:16:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:07.424 01:16:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:07.424 01:16:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:07.424 01:16:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:23:07.424 01:16:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:23:07.424 01:16:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:23:07.424 01:16:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:23:07.424 01:16:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:23:07.424 01:16:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:23:07.424 01:16:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:23:07.424 01:16:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:23:07.424 01:16:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:07.424 01:16:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:23:07.424 00:23:07.424 Discovery Log Number of Records 2, Generation counter 2 00:23:07.424 =====Discovery Log Entry 0====== 00:23:07.424 trtype: tcp 00:23:07.424 adrfam: ipv4 00:23:07.424 subtype: current discovery subsystem 00:23:07.424 treq: not specified, sq flow control disable supported 00:23:07.424 portid: 1 00:23:07.424 trsvcid: 4420 00:23:07.424 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:07.424 traddr: 10.0.0.1 00:23:07.424 eflags: none 00:23:07.424 sectype: none 00:23:07.424 =====Discovery Log Entry 1====== 00:23:07.424 trtype: tcp 00:23:07.424 adrfam: ipv4 00:23:07.424 subtype: nvme subsystem 00:23:07.424 treq: not specified, sq flow control disable supported 00:23:07.424 portid: 1 00:23:07.424 trsvcid: 4420 00:23:07.424 subnqn: nqn.2016-06.io.spdk:testnqn 00:23:07.424 traddr: 10.0.0.1 00:23:07.424 eflags: none 00:23:07.424 sectype: none 00:23:07.424 01:16:23 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:23:07.424 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:23:07.424 EAL: No free 2048 kB hugepages reported on node 1 00:23:07.424 ===================================================== 00:23:07.424 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:07.424 ===================================================== 00:23:07.424 Controller Capabilities/Features 00:23:07.424 ================================ 00:23:07.424 Vendor ID: 0000 00:23:07.424 Subsystem Vendor ID: 0000 00:23:07.424 Serial Number: 4704c36c2d235618da92 00:23:07.424 Model Number: Linux 00:23:07.424 Firmware Version: 6.7.0-68 00:23:07.424 Recommended Arb Burst: 0 00:23:07.424 IEEE OUI Identifier: 00 00 00 00:23:07.424 Multi-path I/O 00:23:07.424 May have multiple subsystem ports: No 00:23:07.424 May have multiple 
controllers: No 00:23:07.424 Associated with SR-IOV VF: No 00:23:07.424 Max Data Transfer Size: Unlimited 00:23:07.424 Max Number of Namespaces: 0 00:23:07.424 Max Number of I/O Queues: 1024 00:23:07.424 NVMe Specification Version (VS): 1.3 00:23:07.424 NVMe Specification Version (Identify): 1.3 00:23:07.424 Maximum Queue Entries: 1024 00:23:07.424 Contiguous Queues Required: No 00:23:07.424 Arbitration Mechanisms Supported 00:23:07.424 Weighted Round Robin: Not Supported 00:23:07.424 Vendor Specific: Not Supported 00:23:07.424 Reset Timeout: 7500 ms 00:23:07.424 Doorbell Stride: 4 bytes 00:23:07.424 NVM Subsystem Reset: Not Supported 00:23:07.424 Command Sets Supported 00:23:07.424 NVM Command Set: Supported 00:23:07.424 Boot Partition: Not Supported 00:23:07.424 Memory Page Size Minimum: 4096 bytes 00:23:07.424 Memory Page Size Maximum: 4096 bytes 00:23:07.424 Persistent Memory Region: Not Supported 00:23:07.424 Optional Asynchronous Events Supported 00:23:07.424 Namespace Attribute Notices: Not Supported 00:23:07.424 Firmware Activation Notices: Not Supported 00:23:07.424 ANA Change Notices: Not Supported 00:23:07.424 PLE Aggregate Log Change Notices: Not Supported 00:23:07.424 LBA Status Info Alert Notices: Not Supported 00:23:07.424 EGE Aggregate Log Change Notices: Not Supported 00:23:07.424 Normal NVM Subsystem Shutdown event: Not Supported 00:23:07.424 Zone Descriptor Change Notices: Not Supported 00:23:07.424 Discovery Log Change Notices: Supported 00:23:07.424 Controller Attributes 00:23:07.424 128-bit Host Identifier: Not Supported 00:23:07.424 Non-Operational Permissive Mode: Not Supported 00:23:07.424 NVM Sets: Not Supported 00:23:07.424 Read Recovery Levels: Not Supported 00:23:07.424 Endurance Groups: Not Supported 00:23:07.424 Predictable Latency Mode: Not Supported 00:23:07.424 Traffic Based Keep ALive: Not Supported 00:23:07.424 Namespace Granularity: Not Supported 00:23:07.424 SQ Associations: Not Supported 00:23:07.424 UUID List: Not Supported 00:23:07.424 Multi-Domain Subsystem: Not Supported 00:23:07.424 Fixed Capacity Management: Not Supported 00:23:07.424 Variable Capacity Management: Not Supported 00:23:07.424 Delete Endurance Group: Not Supported 00:23:07.424 Delete NVM Set: Not Supported 00:23:07.424 Extended LBA Formats Supported: Not Supported 00:23:07.424 Flexible Data Placement Supported: Not Supported 00:23:07.424 00:23:07.424 Controller Memory Buffer Support 00:23:07.424 ================================ 00:23:07.424 Supported: No 00:23:07.424 00:23:07.424 Persistent Memory Region Support 00:23:07.424 ================================ 00:23:07.424 Supported: No 00:23:07.424 00:23:07.424 Admin Command Set Attributes 00:23:07.424 ============================ 00:23:07.424 Security Send/Receive: Not Supported 00:23:07.424 Format NVM: Not Supported 00:23:07.424 Firmware Activate/Download: Not Supported 00:23:07.424 Namespace Management: Not Supported 00:23:07.424 Device Self-Test: Not Supported 00:23:07.424 Directives: Not Supported 00:23:07.424 NVMe-MI: Not Supported 00:23:07.424 Virtualization Management: Not Supported 00:23:07.424 Doorbell Buffer Config: Not Supported 00:23:07.424 Get LBA Status Capability: Not Supported 00:23:07.424 Command & Feature Lockdown Capability: Not Supported 00:23:07.424 Abort Command Limit: 1 00:23:07.424 Async Event Request Limit: 1 00:23:07.424 Number of Firmware Slots: N/A 00:23:07.424 Firmware Slot 1 Read-Only: N/A 00:23:07.424 Firmware Activation Without Reset: N/A 00:23:07.424 Multiple Update Detection Support: N/A 
00:23:07.424 Firmware Update Granularity: No Information Provided 00:23:07.424 Per-Namespace SMART Log: No 00:23:07.424 Asymmetric Namespace Access Log Page: Not Supported 00:23:07.424 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:07.424 Command Effects Log Page: Not Supported 00:23:07.425 Get Log Page Extended Data: Supported 00:23:07.425 Telemetry Log Pages: Not Supported 00:23:07.425 Persistent Event Log Pages: Not Supported 00:23:07.425 Supported Log Pages Log Page: May Support 00:23:07.425 Commands Supported & Effects Log Page: Not Supported 00:23:07.425 Feature Identifiers & Effects Log Page:May Support 00:23:07.425 NVMe-MI Commands & Effects Log Page: May Support 00:23:07.425 Data Area 4 for Telemetry Log: Not Supported 00:23:07.425 Error Log Page Entries Supported: 1 00:23:07.425 Keep Alive: Not Supported 00:23:07.425 00:23:07.425 NVM Command Set Attributes 00:23:07.425 ========================== 00:23:07.425 Submission Queue Entry Size 00:23:07.425 Max: 1 00:23:07.425 Min: 1 00:23:07.425 Completion Queue Entry Size 00:23:07.425 Max: 1 00:23:07.425 Min: 1 00:23:07.425 Number of Namespaces: 0 00:23:07.425 Compare Command: Not Supported 00:23:07.425 Write Uncorrectable Command: Not Supported 00:23:07.425 Dataset Management Command: Not Supported 00:23:07.425 Write Zeroes Command: Not Supported 00:23:07.425 Set Features Save Field: Not Supported 00:23:07.425 Reservations: Not Supported 00:23:07.425 Timestamp: Not Supported 00:23:07.425 Copy: Not Supported 00:23:07.425 Volatile Write Cache: Not Present 00:23:07.425 Atomic Write Unit (Normal): 1 00:23:07.425 Atomic Write Unit (PFail): 1 00:23:07.425 Atomic Compare & Write Unit: 1 00:23:07.425 Fused Compare & Write: Not Supported 00:23:07.425 Scatter-Gather List 00:23:07.425 SGL Command Set: Supported 00:23:07.425 SGL Keyed: Not Supported 00:23:07.425 SGL Bit Bucket Descriptor: Not Supported 00:23:07.425 SGL Metadata Pointer: Not Supported 00:23:07.425 Oversized SGL: Not Supported 00:23:07.425 SGL Metadata Address: Not Supported 00:23:07.425 SGL Offset: Supported 00:23:07.425 Transport SGL Data Block: Not Supported 00:23:07.425 Replay Protected Memory Block: Not Supported 00:23:07.425 00:23:07.425 Firmware Slot Information 00:23:07.425 ========================= 00:23:07.425 Active slot: 0 00:23:07.425 00:23:07.425 00:23:07.425 Error Log 00:23:07.425 ========= 00:23:07.425 00:23:07.425 Active Namespaces 00:23:07.425 ================= 00:23:07.425 Discovery Log Page 00:23:07.425 ================== 00:23:07.425 Generation Counter: 2 00:23:07.425 Number of Records: 2 00:23:07.425 Record Format: 0 00:23:07.425 00:23:07.425 Discovery Log Entry 0 00:23:07.425 ---------------------- 00:23:07.425 Transport Type: 3 (TCP) 00:23:07.425 Address Family: 1 (IPv4) 00:23:07.425 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:07.425 Entry Flags: 00:23:07.425 Duplicate Returned Information: 0 00:23:07.425 Explicit Persistent Connection Support for Discovery: 0 00:23:07.425 Transport Requirements: 00:23:07.425 Secure Channel: Not Specified 00:23:07.425 Port ID: 1 (0x0001) 00:23:07.425 Controller ID: 65535 (0xffff) 00:23:07.425 Admin Max SQ Size: 32 00:23:07.425 Transport Service Identifier: 4420 00:23:07.425 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:07.425 Transport Address: 10.0.0.1 00:23:07.425 Discovery Log Entry 1 00:23:07.425 ---------------------- 00:23:07.425 Transport Type: 3 (TCP) 00:23:07.425 Address Family: 1 (IPv4) 00:23:07.425 Subsystem Type: 2 (NVM Subsystem) 00:23:07.425 Entry Flags: 
00:23:07.425 Duplicate Returned Information: 0 00:23:07.425 Explicit Persistent Connection Support for Discovery: 0 00:23:07.425 Transport Requirements: 00:23:07.425 Secure Channel: Not Specified 00:23:07.425 Port ID: 1 (0x0001) 00:23:07.425 Controller ID: 65535 (0xffff) 00:23:07.425 Admin Max SQ Size: 32 00:23:07.425 Transport Service Identifier: 4420 00:23:07.425 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:23:07.425 Transport Address: 10.0.0.1 00:23:07.425 01:16:23 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:07.425 EAL: No free 2048 kB hugepages reported on node 1 00:23:07.682 get_feature(0x01) failed 00:23:07.682 get_feature(0x02) failed 00:23:07.682 get_feature(0x04) failed 00:23:07.682 ===================================================== 00:23:07.682 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:07.682 ===================================================== 00:23:07.682 Controller Capabilities/Features 00:23:07.682 ================================ 00:23:07.682 Vendor ID: 0000 00:23:07.682 Subsystem Vendor ID: 0000 00:23:07.682 Serial Number: c8187d6371b861682806 00:23:07.682 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:23:07.682 Firmware Version: 6.7.0-68 00:23:07.682 Recommended Arb Burst: 6 00:23:07.682 IEEE OUI Identifier: 00 00 00 00:23:07.682 Multi-path I/O 00:23:07.682 May have multiple subsystem ports: Yes 00:23:07.682 May have multiple controllers: Yes 00:23:07.683 Associated with SR-IOV VF: No 00:23:07.683 Max Data Transfer Size: Unlimited 00:23:07.683 Max Number of Namespaces: 1024 00:23:07.683 Max Number of I/O Queues: 128 00:23:07.683 NVMe Specification Version (VS): 1.3 00:23:07.683 NVMe Specification Version (Identify): 1.3 00:23:07.683 Maximum Queue Entries: 1024 00:23:07.683 Contiguous Queues Required: No 00:23:07.683 Arbitration Mechanisms Supported 00:23:07.683 Weighted Round Robin: Not Supported 00:23:07.683 Vendor Specific: Not Supported 00:23:07.683 Reset Timeout: 7500 ms 00:23:07.683 Doorbell Stride: 4 bytes 00:23:07.683 NVM Subsystem Reset: Not Supported 00:23:07.683 Command Sets Supported 00:23:07.683 NVM Command Set: Supported 00:23:07.683 Boot Partition: Not Supported 00:23:07.683 Memory Page Size Minimum: 4096 bytes 00:23:07.683 Memory Page Size Maximum: 4096 bytes 00:23:07.683 Persistent Memory Region: Not Supported 00:23:07.683 Optional Asynchronous Events Supported 00:23:07.683 Namespace Attribute Notices: Supported 00:23:07.683 Firmware Activation Notices: Not Supported 00:23:07.683 ANA Change Notices: Supported 00:23:07.683 PLE Aggregate Log Change Notices: Not Supported 00:23:07.683 LBA Status Info Alert Notices: Not Supported 00:23:07.683 EGE Aggregate Log Change Notices: Not Supported 00:23:07.683 Normal NVM Subsystem Shutdown event: Not Supported 00:23:07.683 Zone Descriptor Change Notices: Not Supported 00:23:07.683 Discovery Log Change Notices: Not Supported 00:23:07.683 Controller Attributes 00:23:07.683 128-bit Host Identifier: Supported 00:23:07.683 Non-Operational Permissive Mode: Not Supported 00:23:07.683 NVM Sets: Not Supported 00:23:07.683 Read Recovery Levels: Not Supported 00:23:07.683 Endurance Groups: Not Supported 00:23:07.683 Predictable Latency Mode: Not Supported 00:23:07.683 Traffic Based Keep ALive: Supported 00:23:07.683 Namespace Granularity: Not Supported 
00:23:07.683 SQ Associations: Not Supported 00:23:07.683 UUID List: Not Supported 00:23:07.683 Multi-Domain Subsystem: Not Supported 00:23:07.683 Fixed Capacity Management: Not Supported 00:23:07.683 Variable Capacity Management: Not Supported 00:23:07.683 Delete Endurance Group: Not Supported 00:23:07.683 Delete NVM Set: Not Supported 00:23:07.683 Extended LBA Formats Supported: Not Supported 00:23:07.683 Flexible Data Placement Supported: Not Supported 00:23:07.683 00:23:07.683 Controller Memory Buffer Support 00:23:07.683 ================================ 00:23:07.683 Supported: No 00:23:07.683 00:23:07.683 Persistent Memory Region Support 00:23:07.683 ================================ 00:23:07.683 Supported: No 00:23:07.683 00:23:07.683 Admin Command Set Attributes 00:23:07.683 ============================ 00:23:07.683 Security Send/Receive: Not Supported 00:23:07.683 Format NVM: Not Supported 00:23:07.683 Firmware Activate/Download: Not Supported 00:23:07.683 Namespace Management: Not Supported 00:23:07.683 Device Self-Test: Not Supported 00:23:07.683 Directives: Not Supported 00:23:07.683 NVMe-MI: Not Supported 00:23:07.683 Virtualization Management: Not Supported 00:23:07.683 Doorbell Buffer Config: Not Supported 00:23:07.683 Get LBA Status Capability: Not Supported 00:23:07.683 Command & Feature Lockdown Capability: Not Supported 00:23:07.683 Abort Command Limit: 4 00:23:07.683 Async Event Request Limit: 4 00:23:07.683 Number of Firmware Slots: N/A 00:23:07.683 Firmware Slot 1 Read-Only: N/A 00:23:07.683 Firmware Activation Without Reset: N/A 00:23:07.683 Multiple Update Detection Support: N/A 00:23:07.683 Firmware Update Granularity: No Information Provided 00:23:07.683 Per-Namespace SMART Log: Yes 00:23:07.683 Asymmetric Namespace Access Log Page: Supported 00:23:07.683 ANA Transition Time : 10 sec 00:23:07.683 00:23:07.683 Asymmetric Namespace Access Capabilities 00:23:07.683 ANA Optimized State : Supported 00:23:07.683 ANA Non-Optimized State : Supported 00:23:07.683 ANA Inaccessible State : Supported 00:23:07.683 ANA Persistent Loss State : Supported 00:23:07.683 ANA Change State : Supported 00:23:07.683 ANAGRPID is not changed : No 00:23:07.683 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:23:07.683 00:23:07.683 ANA Group Identifier Maximum : 128 00:23:07.683 Number of ANA Group Identifiers : 128 00:23:07.683 Max Number of Allowed Namespaces : 1024 00:23:07.683 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:23:07.683 Command Effects Log Page: Supported 00:23:07.683 Get Log Page Extended Data: Supported 00:23:07.683 Telemetry Log Pages: Not Supported 00:23:07.683 Persistent Event Log Pages: Not Supported 00:23:07.683 Supported Log Pages Log Page: May Support 00:23:07.683 Commands Supported & Effects Log Page: Not Supported 00:23:07.683 Feature Identifiers & Effects Log Page:May Support 00:23:07.683 NVMe-MI Commands & Effects Log Page: May Support 00:23:07.683 Data Area 4 for Telemetry Log: Not Supported 00:23:07.683 Error Log Page Entries Supported: 128 00:23:07.683 Keep Alive: Supported 00:23:07.683 Keep Alive Granularity: 1000 ms 00:23:07.683 00:23:07.683 NVM Command Set Attributes 00:23:07.683 ========================== 00:23:07.683 Submission Queue Entry Size 00:23:07.683 Max: 64 00:23:07.683 Min: 64 00:23:07.683 Completion Queue Entry Size 00:23:07.683 Max: 16 00:23:07.683 Min: 16 00:23:07.683 Number of Namespaces: 1024 00:23:07.683 Compare Command: Not Supported 00:23:07.683 Write Uncorrectable Command: Not Supported 00:23:07.683 Dataset Management Command: Supported 
00:23:07.683 Write Zeroes Command: Supported 00:23:07.683 Set Features Save Field: Not Supported 00:23:07.683 Reservations: Not Supported 00:23:07.683 Timestamp: Not Supported 00:23:07.683 Copy: Not Supported 00:23:07.683 Volatile Write Cache: Present 00:23:07.683 Atomic Write Unit (Normal): 1 00:23:07.683 Atomic Write Unit (PFail): 1 00:23:07.683 Atomic Compare & Write Unit: 1 00:23:07.683 Fused Compare & Write: Not Supported 00:23:07.683 Scatter-Gather List 00:23:07.683 SGL Command Set: Supported 00:23:07.683 SGL Keyed: Not Supported 00:23:07.683 SGL Bit Bucket Descriptor: Not Supported 00:23:07.683 SGL Metadata Pointer: Not Supported 00:23:07.683 Oversized SGL: Not Supported 00:23:07.683 SGL Metadata Address: Not Supported 00:23:07.683 SGL Offset: Supported 00:23:07.683 Transport SGL Data Block: Not Supported 00:23:07.683 Replay Protected Memory Block: Not Supported 00:23:07.683 00:23:07.683 Firmware Slot Information 00:23:07.683 ========================= 00:23:07.683 Active slot: 0 00:23:07.683 00:23:07.683 Asymmetric Namespace Access 00:23:07.683 =========================== 00:23:07.683 Change Count : 0 00:23:07.683 Number of ANA Group Descriptors : 1 00:23:07.683 ANA Group Descriptor : 0 00:23:07.683 ANA Group ID : 1 00:23:07.683 Number of NSID Values : 1 00:23:07.683 Change Count : 0 00:23:07.683 ANA State : 1 00:23:07.683 Namespace Identifier : 1 00:23:07.683 00:23:07.683 Commands Supported and Effects 00:23:07.683 ============================== 00:23:07.683 Admin Commands 00:23:07.683 -------------- 00:23:07.683 Get Log Page (02h): Supported 00:23:07.683 Identify (06h): Supported 00:23:07.683 Abort (08h): Supported 00:23:07.683 Set Features (09h): Supported 00:23:07.683 Get Features (0Ah): Supported 00:23:07.683 Asynchronous Event Request (0Ch): Supported 00:23:07.683 Keep Alive (18h): Supported 00:23:07.683 I/O Commands 00:23:07.683 ------------ 00:23:07.683 Flush (00h): Supported 00:23:07.683 Write (01h): Supported LBA-Change 00:23:07.683 Read (02h): Supported 00:23:07.683 Write Zeroes (08h): Supported LBA-Change 00:23:07.683 Dataset Management (09h): Supported 00:23:07.683 00:23:07.683 Error Log 00:23:07.683 ========= 00:23:07.683 Entry: 0 00:23:07.683 Error Count: 0x3 00:23:07.683 Submission Queue Id: 0x0 00:23:07.683 Command Id: 0x5 00:23:07.683 Phase Bit: 0 00:23:07.683 Status Code: 0x2 00:23:07.683 Status Code Type: 0x0 00:23:07.683 Do Not Retry: 1 00:23:07.683 Error Location: 0x28 00:23:07.683 LBA: 0x0 00:23:07.683 Namespace: 0x0 00:23:07.683 Vendor Log Page: 0x0 00:23:07.683 ----------- 00:23:07.683 Entry: 1 00:23:07.683 Error Count: 0x2 00:23:07.683 Submission Queue Id: 0x0 00:23:07.683 Command Id: 0x5 00:23:07.683 Phase Bit: 0 00:23:07.683 Status Code: 0x2 00:23:07.683 Status Code Type: 0x0 00:23:07.683 Do Not Retry: 1 00:23:07.683 Error Location: 0x28 00:23:07.683 LBA: 0x0 00:23:07.683 Namespace: 0x0 00:23:07.683 Vendor Log Page: 0x0 00:23:07.683 ----------- 00:23:07.683 Entry: 2 00:23:07.683 Error Count: 0x1 00:23:07.683 Submission Queue Id: 0x0 00:23:07.683 Command Id: 0x4 00:23:07.683 Phase Bit: 0 00:23:07.683 Status Code: 0x2 00:23:07.683 Status Code Type: 0x0 00:23:07.683 Do Not Retry: 1 00:23:07.683 Error Location: 0x28 00:23:07.683 LBA: 0x0 00:23:07.683 Namespace: 0x0 00:23:07.683 Vendor Log Page: 0x0 00:23:07.683 00:23:07.683 Number of Queues 00:23:07.683 ================ 00:23:07.683 Number of I/O Submission Queues: 128 00:23:07.683 Number of I/O Completion Queues: 128 00:23:07.683 00:23:07.683 ZNS Specific Controller Data 00:23:07.683 
============================ 00:23:07.683 Zone Append Size Limit: 0 00:23:07.683 00:23:07.683 00:23:07.683 Active Namespaces 00:23:07.683 ================= 00:23:07.683 get_feature(0x05) failed 00:23:07.683 Namespace ID:1 00:23:07.683 Command Set Identifier: NVM (00h) 00:23:07.683 Deallocate: Supported 00:23:07.683 Deallocated/Unwritten Error: Not Supported 00:23:07.683 Deallocated Read Value: Unknown 00:23:07.683 Deallocate in Write Zeroes: Not Supported 00:23:07.683 Deallocated Guard Field: 0xFFFF 00:23:07.683 Flush: Supported 00:23:07.683 Reservation: Not Supported 00:23:07.683 Namespace Sharing Capabilities: Multiple Controllers 00:23:07.683 Size (in LBAs): 1953525168 (931GiB) 00:23:07.683 Capacity (in LBAs): 1953525168 (931GiB) 00:23:07.683 Utilization (in LBAs): 1953525168 (931GiB) 00:23:07.683 UUID: 28b6ba64-5c2f-4e0d-a08a-66f6c3647b00 00:23:07.683 Thin Provisioning: Not Supported 00:23:07.683 Per-NS Atomic Units: Yes 00:23:07.683 Atomic Boundary Size (Normal): 0 00:23:07.683 Atomic Boundary Size (PFail): 0 00:23:07.683 Atomic Boundary Offset: 0 00:23:07.683 NGUID/EUI64 Never Reused: No 00:23:07.683 ANA group ID: 1 00:23:07.683 Namespace Write Protected: No 00:23:07.683 Number of LBA Formats: 1 00:23:07.683 Current LBA Format: LBA Format #00 00:23:07.683 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:07.683 00:23:07.683 01:16:23 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:23:07.683 01:16:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:07.683 01:16:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:23:07.683 01:16:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:07.683 01:16:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:23:07.683 01:16:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:07.683 01:16:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:07.683 rmmod nvme_tcp 00:23:07.683 rmmod nvme_fabrics 00:23:07.683 01:16:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:07.683 01:16:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:23:07.683 01:16:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:23:07.683 01:16:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:23:07.683 01:16:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:07.683 01:16:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:07.683 01:16:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:07.683 01:16:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:07.683 01:16:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:07.683 01:16:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:07.683 01:16:23 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:07.683 01:16:23 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:09.585 01:16:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:09.585 
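configure_kernel_target, traced earlier, builds the kernel nvmet target through configfs. xtrace does not print redirection targets, so the attribute files below are the standard nvmet configfs names the echoed values most plausibly land in (attr_model in particular is an assumption); the device, NQN, address, and port are taken from this run, and the trace itself only shows "modprobe nvmet" explicitly:

  SS=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  PORT=/sys/kernel/config/nvmet/ports/1
  modprobe nvmet
  modprobe nvmet_tcp                               # tcp transport; removed later by modprobe -r nvmet_tcp nvmet
  mkdir -p "$SS/namespaces/1" "$PORT"
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$SS/attr_model"   # assumed target file for the model string
  echo 1 > "$SS/attr_allow_any_host"
  echo /dev/nvme0n1 > "$SS/namespaces/1/device_path"         # free, non-zoned block device found above
  echo 1 > "$SS/namespaces/1/enable"
  echo 10.0.0.1 > "$PORT/addr_traddr"
  echo tcp  > "$PORT/addr_trtype"
  echo 4420 > "$PORT/addr_trsvcid"
  echo ipv4 > "$PORT/addr_adrfam"
  ln -s "$SS" "$PORT/subsystems/"                  # expose the subsystem on the port

After the symlink, "nvme discover -t tcp -a 10.0.0.1 -s 4420" returns the two records shown above (the discovery subsystem plus nqn.2016-06.io.spdk:testnqn), and clean_kernel_target undoes the setup in reverse: rm -f the port symlink, rmdir the namespace, port, and subsystem directories, then modprobe -r nvmet_tcp nvmet.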
01:16:25 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:23:09.585 01:16:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:23:09.585 01:16:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:23:09.585 01:16:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:09.585 01:16:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:09.585 01:16:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:09.585 01:16:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:09.585 01:16:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:23:09.585 01:16:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:23:09.841 01:16:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:11.215 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:11.215 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:11.215 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:11.215 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:11.215 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:11.215 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:11.215 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:11.215 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:11.215 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:11.215 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:11.215 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:11.215 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:11.215 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:11.215 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:11.215 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:11.215 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:12.151 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:23:12.151 00:23:12.151 real 0m9.758s 00:23:12.151 user 0m2.031s 00:23:12.151 sys 0m3.597s 00:23:12.151 01:16:28 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:12.151 01:16:28 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.151 ************************************ 00:23:12.151 END TEST nvmf_identify_kernel_target 00:23:12.151 ************************************ 00:23:12.151 01:16:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:12.151 01:16:28 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:12.151 01:16:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:12.151 01:16:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:12.151 01:16:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:12.151 ************************************ 00:23:12.151 START TEST nvmf_auth_host 00:23:12.151 ************************************ 00:23:12.151 01:16:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:12.411 * Looking for test storage... 00:23:12.411 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:12.411 01:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:12.411 01:16:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:23:12.411 01:16:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:12.411 01:16:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:12.411 01:16:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:12.411 01:16:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:12.411 01:16:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:12.411 01:16:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:12.411 01:16:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:12.411 01:16:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:12.411 01:16:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:12.411 01:16:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:12.411 01:16:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:12.411 01:16:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:12.411 01:16:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:12.411 01:16:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:12.411 01:16:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:12.411 01:16:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:12.411 01:16:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:12.411 01:16:28 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:12.411 01:16:28 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:12.411 01:16:28 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:12.411 01:16:28 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.411 01:16:28 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.411 01:16:28 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.411 01:16:28 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:23:12.411 01:16:28 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.411 01:16:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:23:12.411 01:16:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:12.412 01:16:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:12.412 01:16:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:12.412 01:16:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:12.412 01:16:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:12.412 01:16:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:12.412 01:16:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:12.412 01:16:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:12.412 01:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:23:12.412 01:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:23:12.412 01:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:23:12.412 01:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:23:12.412 01:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:12.412 01:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:12.412 01:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:23:12.412 01:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:23:12.412 01:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:23:12.412 01:16:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:12.412 01:16:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:12.412 01:16:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:12.412 01:16:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:12.412 01:16:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:12.412 01:16:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:12.412 01:16:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:12.412 01:16:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:12.412 01:16:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:12.412 01:16:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:12.412 01:16:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:23:12.412 01:16:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:14.949 
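[editor's note] The gather step above is bucketing NICs by PCI vendor:device ID via a prebuilt pci_bus_cache map. A minimal standalone sketch of the same classification, using lspci instead of the cached map (the lspci parsing and the ID-to-family comments are our assumptions, not the SPDK helper itself):

declare -a e810 x722 mlx
while read -r addr _ vendor device _; do
    vendor=${vendor//\"/}; device=${device//\"/}
    case "$vendor:$device" in
        8086:1592|8086:159b) e810+=("$addr") ;;   # Intel E810 (ice)
        8086:37d2)           x722+=("$addr") ;;   # Intel X722 (i40e)
        15b3:1017|15b3:1019) mlx+=("$addr")  ;;   # Mellanox ConnectX-5 family
    esac
done < <(lspci -Dnmm)
printf '%s\n' "e810: ${e810[*]}"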
01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:14.949 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:14.949 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:14.949 Found net devices under 0000:09:00.0: 
cvl_0_0 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:14.949 Found net devices under 0000:09:00.1: cvl_0_1 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:14.949 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:14.950 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:14.950 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:23:14.950 00:23:14.950 --- 10.0.0.2 ping statistics --- 00:23:14.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:14.950 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:14.950 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:14.950 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:23:14.950 00:23:14.950 --- 10.0.0.1 ping statistics --- 00:23:14.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:14.950 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=41819 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 41819 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 41819 ']' 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
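[editor's note] The nvmf_tcp_init sequence above splits the two physical ports into a target/initiator pair and verifies reachability in both directions. Condensed into plain commands (interface names and addresses mirror this run; everything else is illustrative and error handling is omitted):

ip netns add cvl_0_0_ns_spdk                      # target gets its own netns
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator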
00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e7f94151041f365b46de2fe350ab557f 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.kNp 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e7f94151041f365b46de2fe350ab557f 0 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e7f94151041f365b46de2fe350ab557f 0 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e7f94151041f365b46de2fe350ab557f 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.kNp 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.kNp 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.kNp 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:23:14.950 
01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b2bff5c212c0da9bb60e27cbe149d7066b80a803f467afe812af8907ca37b79f 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.hOp 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b2bff5c212c0da9bb60e27cbe149d7066b80a803f467afe812af8907ca37b79f 3 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b2bff5c212c0da9bb60e27cbe149d7066b80a803f467afe812af8907ca37b79f 3 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b2bff5c212c0da9bb60e27cbe149d7066b80a803f467afe812af8907ca37b79f 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.hOp 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.hOp 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.hOp 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:14.950 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c656582f3c0f7a70c7f188ccba220125ad4bcd52ce41fb9e 00:23:15.210 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:15.210 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.b7W 00:23:15.210 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c656582f3c0f7a70c7f188ccba220125ad4bcd52ce41fb9e 0 00:23:15.210 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c656582f3c0f7a70c7f188ccba220125ad4bcd52ce41fb9e 0 00:23:15.210 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:15.210 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:15.210 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c656582f3c0f7a70c7f188ccba220125ad4bcd52ce41fb9e 00:23:15.210 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:15.210 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:15.210 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.b7W 00:23:15.210 01:16:30 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.b7W 00:23:15.210 01:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.b7W 00:23:15.210 01:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:23:15.210 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:15.210 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:15.210 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:15.210 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:23:15.210 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:15.210 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:15.210 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=398208d33d8ce4840c3ada0ede63c1da6866be7d6fa3b2b8 00:23:15.210 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:23:15.210 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.sZA 00:23:15.210 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 398208d33d8ce4840c3ada0ede63c1da6866be7d6fa3b2b8 2 00:23:15.210 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 398208d33d8ce4840c3ada0ede63c1da6866be7d6fa3b2b8 2 00:23:15.210 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:15.210 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:15.210 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=398208d33d8ce4840c3ada0ede63c1da6866be7d6fa3b2b8 00:23:15.210 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:23:15.210 01:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.sZA 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.sZA 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.sZA 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a2fba2f17c3b9676d64c3e151b1abbf4 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.sYA 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a2fba2f17c3b9676d64c3e151b1abbf4 1 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a2fba2f17c3b9676d64c3e151b1abbf4 1 
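[editor's note] Each gen_dhchap_key call above draws random hex from /dev/urandom and wraps it into the DHHC-1 interchange format (digest id 0=null, 1=sha256, 2=sha384, 3=sha512). A self-contained sketch of that flow; the trailer layout (base64 of the secret bytes plus a little-endian CRC32) is our reading of the keys observed in this log, so treat it as an assumption:

gen_key() {
    local digest_id=$1 len=$2 key
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex chars
    python3 - "$key" "$digest_id" <<'EOF'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
blob = key + zlib.crc32(key).to_bytes(4, "little")   # secret + CRC trailer
print(f"DHHC-1:{digest:02x}:{base64.b64encode(blob).decode()}:")
EOF
}
gen_key 0 32   # 32 hex chars, null digest, like keys[0] above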
00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a2fba2f17c3b9676d64c3e151b1abbf4 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.sYA 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.sYA 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.sYA 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=070629ccbd64970842c6b0d46021844a 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.kyZ 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 070629ccbd64970842c6b0d46021844a 1 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 070629ccbd64970842c6b0d46021844a 1 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=070629ccbd64970842c6b0d46021844a 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.kyZ 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.kyZ 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.kyZ 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=8b4911963594b527a702fe451a57cce04b8853bb8d5da0bf 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.oCy 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8b4911963594b527a702fe451a57cce04b8853bb8d5da0bf 2 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8b4911963594b527a702fe451a57cce04b8853bb8d5da0bf 2 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8b4911963594b527a702fe451a57cce04b8853bb8d5da0bf 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.oCy 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.oCy 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.oCy 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=981ad00c09a3d3689d14916ca41d3c6f 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.DAz 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 981ad00c09a3d3689d14916ca41d3c6f 0 00:23:15.210 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 981ad00c09a3d3689d14916ca41d3c6f 0 00:23:15.211 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:15.211 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:15.211 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=981ad00c09a3d3689d14916ca41d3c6f 00:23:15.211 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:15.211 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:15.469 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.DAz 00:23:15.469 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.DAz 00:23:15.469 01:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.DAz 00:23:15.469 01:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:23:15.469 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:23:15.469 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:15.469 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:15.469 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:23:15.469 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:23:15.469 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:15.469 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=10ed82f0f875345dbcd832afa65771427003f1078c689ee80d557a65182a0246 00:23:15.469 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:23:15.469 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.cgz 00:23:15.469 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 10ed82f0f875345dbcd832afa65771427003f1078c689ee80d557a65182a0246 3 00:23:15.469 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 10ed82f0f875345dbcd832afa65771427003f1078c689ee80d557a65182a0246 3 00:23:15.469 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:15.469 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:15.469 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=10ed82f0f875345dbcd832afa65771427003f1078c689ee80d557a65182a0246 00:23:15.469 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:23:15.469 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:15.469 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.cgz 00:23:15.469 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.cgz 00:23:15.469 01:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.cgz 00:23:15.469 01:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:23:15.469 01:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 41819 00:23:15.469 01:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 41819 ']' 00:23:15.469 01:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:15.469 01:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:15.469 01:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:15.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
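[editor's note] waitforlisten, invoked above against pid 41819, amounts to polling the app's RPC UNIX socket until it answers, bailing out if the process dies. A minimal sketch under those assumptions (the 100-try budget and rpc_get_methods probe mirror common SPDK usage; the helper name and sleep interval are ours):

wait_for_rpc() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1                        # app died
        scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && return 0
        sleep 0.5
    done
    return 1
}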
00:23:15.469 01:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:15.469 01:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.728 01:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:15.728 01:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:23:15.728 01:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:15.728 01:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.kNp 00:23:15.728 01:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.728 01:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.728 01:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.728 01:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.hOp ]] 00:23:15.728 01:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.hOp 00:23:15.728 01:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.728 01:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.728 01:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.728 01:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:15.728 01:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.b7W 00:23:15.728 01:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.728 01:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.728 01:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.728 01:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.sZA ]] 00:23:15.728 01:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.sZA 00:23:15.728 01:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.728 01:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.728 01:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.728 01:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:15.728 01:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.sYA 00:23:15.728 01:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.728 01:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.728 01:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.728 01:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.kyZ ]] 00:23:15.728 01:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.kyZ 00:23:15.728 01:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.728 01:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.728 01:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.728 01:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
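[editor's note] The registration loop above, restated as plain rpc.py calls: every generated key file becomes a named keyring entry, and each controller key (ckeyN) rides along when one was generated. keyring_file_add_key is the real RPC from the log; the bare loop form is an illustrative condensation:

for i in "${!keys[@]}"; do
    scripts/rpc.py keyring_file_add_key "key$i" "${keys[i]}"
    [[ -n ${ckeys[i]} ]] && scripts/rpc.py keyring_file_add_key "ckey$i" "${ckeys[i]}"
done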
00:23:15.728 01:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.oCy 00:23:15.728 01:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.728 01:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.728 01:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.728 01:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.DAz ]] 00:23:15.728 01:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.DAz 00:23:15.728 01:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.728 01:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.728 01:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.728 01:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:15.728 01:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.cgz 00:23:15.728 01:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.728 01:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.728 01:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.728 01:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:23:15.728 01:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:23:15.728 01:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:23:15.728 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:15.728 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:15.728 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:15.728 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:15.728 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:15.728 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:15.728 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:15.728 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:15.728 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:15.728 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:15.728 01:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:23:15.728 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:23:15.728 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:23:15.728 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:15.728 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:15.728 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:15.728 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
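[editor's note] configure_kernel_target, entered above, boils down to the configfs steps logged next: a subsystem with one namespace backed by the reclaimed /dev/nvme0n1, one TCP port on 10.0.0.1:4420, and the port-to-subsystem symlink. A condensed sketch with the paths from this run and no error handling:

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
modprobe nvmet
mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
echo tcp          > "$nvmet/ports/1/addr_trtype"
echo 4420         > "$nvmet/ports/1/addr_trsvcid"
echo ipv4         > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"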
00:23:15.728 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:23:15.728 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:23:15.728 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:15.728 01:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:17.102 Waiting for block devices as requested 00:23:17.102 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:17.102 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:17.102 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:17.102 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:17.102 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:17.102 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:17.374 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:17.374 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:17.374 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:23:17.637 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:17.637 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:17.637 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:17.906 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:17.906 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:17.906 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:17.906 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:18.199 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:18.457 01:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:18.457 01:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:18.457 01:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:23:18.457 01:16:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:23:18.457 01:16:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:18.457 01:16:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:18.457 01:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:23:18.457 01:16:34 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:18.457 01:16:34 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:18.457 No valid GPT data, bailing 00:23:18.457 01:16:34 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:18.457 01:16:34 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:23:18.457 01:16:34 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:23:18.457 01:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:23:18.457 01:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:23:18.457 01:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:18.457 01:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:18.717 01:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:18.717 01:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:23:18.717 01:16:34 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:23:18.717 01:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:23:18.717 01:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:23:18.717 01:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:23:18.717 01:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:23:18.717 01:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:23:18.717 01:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:23:18.717 01:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:18.717 01:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:23:18.717 00:23:18.717 Discovery Log Number of Records 2, Generation counter 2 00:23:18.717 =====Discovery Log Entry 0====== 00:23:18.717 trtype: tcp 00:23:18.717 adrfam: ipv4 00:23:18.717 subtype: current discovery subsystem 00:23:18.717 treq: not specified, sq flow control disable supported 00:23:18.717 portid: 1 00:23:18.717 trsvcid: 4420 00:23:18.717 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:18.717 traddr: 10.0.0.1 00:23:18.717 eflags: none 00:23:18.717 sectype: none 00:23:18.717 =====Discovery Log Entry 1====== 00:23:18.717 trtype: tcp 00:23:18.717 adrfam: ipv4 00:23:18.717 subtype: nvme subsystem 00:23:18.717 treq: not specified, sq flow control disable supported 00:23:18.717 portid: 1 00:23:18.717 trsvcid: 4420 00:23:18.717 subnqn: nqn.2024-02.io.spdk:cnode0 00:23:18.717 traddr: 10.0.0.1 00:23:18.717 eflags: none 00:23:18.717 sectype: none 00:23:18.717 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:18.717 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:23:18.717 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:18.717 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:18.717 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:18.717 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:18.717 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:18.717 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:18.717 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzY1NjU4MmYzYzBmN2E3MGM3ZjE4OGNjYmEyMjAxMjVhZDRiY2Q1MmNlNDFmYjllG7L9XA==: 00:23:18.717 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzk4MjA4ZDMzZDhjZTQ4NDBjM2FkYTBlZGU2M2MxZGE2ODY2YmU3ZDZmYTNiMmI4bFoAIQ==: 00:23:18.717 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:18.717 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:18.717 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzY1NjU4MmYzYzBmN2E3MGM3ZjE4OGNjYmEyMjAxMjVhZDRiY2Q1MmNlNDFmYjllG7L9XA==: 00:23:18.717 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzk4MjA4ZDMzZDhjZTQ4NDBjM2FkYTBlZGU2M2MxZGE2ODY2YmU3ZDZmYTNiMmI4bFoAIQ==: 
]] 00:23:18.717 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzk4MjA4ZDMzZDhjZTQ4NDBjM2FkYTBlZGU2M2MxZGE2ODY2YmU3ZDZmYTNiMmI4bFoAIQ==: 00:23:18.717 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:18.717 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:23:18.717 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:18.717 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:18.717 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:23:18.717 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:18.717 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:23:18.717 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:18.717 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:18.717 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:18.717 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:18.717 01:16:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.717 01:16:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.717 01:16:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.717 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:18.717 01:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:18.717 01:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:18.717 01:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:18.717 01:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:18.717 01:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:18.717 01:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:18.717 01:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:18.717 01:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:18.717 01:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:18.717 01:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:18.717 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:18.717 01:16:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.717 01:16:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.717 nvme0n1 00:23:18.717 01:16:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.717 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:18.717 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:18.717 01:16:34 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.717 01:16:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.717 01:16:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.975 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.975 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:18.975 01:16:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.975 01:16:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.975 01:16:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.975 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:18.975 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:18.975 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:18.975 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:23:18.975 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:18.975 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:18.975 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:18.975 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:18.975 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdmOTQxNTEwNDFmMzY1YjQ2ZGUyZmUzNTBhYjU1N2a/fNit: 00:23:18.975 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjJiZmY1YzIxMmMwZGE5YmI2MGUyN2NiZTE0OWQ3MDY2YjgwYTgwM2Y0NjdhZmU4MTJhZjg5MDdjYTM3Yjc5ZhS/xuQ=: 00:23:18.975 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:18.975 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:18.975 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdmOTQxNTEwNDFmMzY1YjQ2ZGUyZmUzNTBhYjU1N2a/fNit: 00:23:18.975 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjJiZmY1YzIxMmMwZGE5YmI2MGUyN2NiZTE0OWQ3MDY2YjgwYTgwM2Y0NjdhZmU4MTJhZjg5MDdjYTM3Yjc5ZhS/xuQ=: ]] 00:23:18.975 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjJiZmY1YzIxMmMwZGE5YmI2MGUyN2NiZTE0OWQ3MDY2YjgwYTgwM2Y0NjdhZmU4MTJhZjg5MDdjYTM3Yjc5ZhS/xuQ=: 00:23:18.976 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:23:18.976 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:18.976 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:18.976 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:18.976 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:18.976 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:18.976 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:18.976 01:16:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.976 01:16:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.976 01:16:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.976 
01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:18.976 01:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:18.976 01:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:18.976 01:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:18.976 01:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:18.976 01:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:18.976 01:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:18.976 01:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:18.976 01:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:18.976 01:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:18.976 01:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:18.976 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:18.976 01:16:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.976 01:16:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.976 nvme0n1 00:23:18.976 01:16:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.976 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:18.976 01:16:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.976 01:16:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.976 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:18.976 01:16:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.976 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.976 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:18.976 01:16:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.976 01:16:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.976 01:16:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.976 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:18.976 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:18.976 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:18.976 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:18.976 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:18.976 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:18.976 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzY1NjU4MmYzYzBmN2E3MGM3ZjE4OGNjYmEyMjAxMjVhZDRiY2Q1MmNlNDFmYjllG7L9XA==: 00:23:18.976 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzk4MjA4ZDMzZDhjZTQ4NDBjM2FkYTBlZGU2M2MxZGE2ODY2YmU3ZDZmYTNiMmI4bFoAIQ==: 00:23:18.976 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:18.976 01:16:34 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:18.976 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzY1NjU4MmYzYzBmN2E3MGM3ZjE4OGNjYmEyMjAxMjVhZDRiY2Q1MmNlNDFmYjllG7L9XA==: 00:23:18.976 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzk4MjA4ZDMzZDhjZTQ4NDBjM2FkYTBlZGU2M2MxZGE2ODY2YmU3ZDZmYTNiMmI4bFoAIQ==: ]] 00:23:18.976 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzk4MjA4ZDMzZDhjZTQ4NDBjM2FkYTBlZGU2M2MxZGE2ODY2YmU3ZDZmYTNiMmI4bFoAIQ==: 00:23:18.976 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:23:18.976 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:18.976 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:18.976 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:18.976 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:18.976 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:18.976 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:18.976 01:16:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.976 01:16:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.234 01:16:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.234 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:19.234 01:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:19.234 01:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:19.234 01:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:19.234 01:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:19.234 01:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:19.234 01:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:19.234 01:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:19.234 01:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:19.234 01:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:19.234 01:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:19.234 01:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:19.234 01:16:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.234 01:16:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.234 nvme0n1 00:23:19.234 01:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.234 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:19.234 01:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.234 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:19.234 01:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
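The block above is one complete pass of the per-key check: restrict the host to a single digest/DH-group pair, attach with the key under test, confirm the controller shows up as nvme0, then detach. Condensed into the underlying RPC calls (all of which appear verbatim in the trace; rpc_cmd is the autotest wrapper around scripts/rpc.py, and the key objects key1/ckey1 are assumed to have been loaded earlier in the run, outside this excerpt):

    # One connect_authenticate pass, host side (digest, dhgroup, keyid vary per iteration)
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1    # ctrlr key dropped when ckey is empty
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]  # attach succeeded
    rpc_cmd bdev_nvme_detach_controller nvme0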
00:23:19.234 01:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.234 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.234 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:19.234 01:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.234 01:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.234 01:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.234 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:19.234 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:23:19.234 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:19.234 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:19.234 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:19.234 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:19.234 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTJmYmEyZjE3YzNiOTY3NmQ2NGMzZTE1MWIxYWJiZjSywDe0: 00:23:19.234 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDcwNjI5Y2NiZDY0OTcwODQyYzZiMGQ0NjAyMTg0NGHT7qr6: 00:23:19.234 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:19.234 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:19.234 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTJmYmEyZjE3YzNiOTY3NmQ2NGMzZTE1MWIxYWJiZjSywDe0: 00:23:19.234 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDcwNjI5Y2NiZDY0OTcwODQyYzZiMGQ0NjAyMTg0NGHT7qr6: ]] 00:23:19.234 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDcwNjI5Y2NiZDY0OTcwODQyYzZiMGQ0NjAyMTg0NGHT7qr6: 00:23:19.234 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:23:19.234 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:19.234 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:19.234 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:19.234 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:19.234 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:19.234 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:19.234 01:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.234 01:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.234 01:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.234 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:19.234 01:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:19.234 01:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:19.234 01:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:19.234 01:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:19.234 01:16:35 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:19.234 01:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:19.234 01:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:19.234 01:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:19.234 01:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:19.234 01:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:19.234 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:19.234 01:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.234 01:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.492 nvme0n1 00:23:19.492 01:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.492 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:19.492 01:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.492 01:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.492 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:19.492 01:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.492 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.492 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:19.492 01:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.492 01:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.492 01:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.492 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:19.492 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:23:19.492 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:19.492 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:19.492 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:19.492 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:19.492 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGI0OTExOTYzNTk0YjUyN2E3MDJmZTQ1MWE1N2NjZTA0Yjg4NTNiYjhkNWRhMGJmJJWzog==: 00:23:19.492 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTgxYWQwMGMwOWEzZDM2ODlkMTQ5MTZjYTQxZDNjNmaYWk7V: 00:23:19.492 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:19.492 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:19.492 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGI0OTExOTYzNTk0YjUyN2E3MDJmZTQ1MWE1N2NjZTA0Yjg4NTNiYjhkNWRhMGJmJJWzog==: 00:23:19.492 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTgxYWQwMGMwOWEzZDM2ODlkMTQ5MTZjYTQxZDNjNmaYWk7V: ]] 00:23:19.492 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTgxYWQwMGMwOWEzZDM2ODlkMTQ5MTZjYTQxZDNjNmaYWk7V: 00:23:19.492 01:16:35 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:23:19.492 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:19.492 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:19.492 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:19.492 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:19.492 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:19.492 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:19.492 01:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.492 01:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.492 01:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.492 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:19.492 01:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:19.492 01:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:19.492 01:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:19.492 01:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:19.492 01:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:19.493 01:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:19.493 01:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:19.493 01:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:19.493 01:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:19.493 01:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:19.493 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:19.493 01:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.493 01:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.751 nvme0n1 00:23:19.751 01:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.751 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:19.751 01:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.751 01:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.751 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:19.751 01:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.751 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.751 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:19.751 01:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.751 01:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.751 01:16:35 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.751 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:19.751 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:23:19.751 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:19.751 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:19.751 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:19.751 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:19.751 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTBlZDgyZjBmODc1MzQ1ZGJjZDgzMmFmYTY1NzcxNDI3MDAzZjEwNzhjNjg5ZWU4MGQ1NTdhNjUxODJhMDI0Np/o7pE=: 00:23:19.751 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:19.751 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:19.751 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:19.751 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTBlZDgyZjBmODc1MzQ1ZGJjZDgzMmFmYTY1NzcxNDI3MDAzZjEwNzhjNjg5ZWU4MGQ1NTdhNjUxODJhMDI0Np/o7pE=: 00:23:19.751 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:19.751 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:23:19.751 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:19.751 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:19.751 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:19.751 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:19.751 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:19.751 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:19.751 01:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.751 01:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.751 01:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.751 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:19.751 01:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:19.751 01:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:19.751 01:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:19.751 01:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:19.751 01:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:19.751 01:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:19.751 01:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:19.751 01:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:19.751 01:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:19.751 01:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:19.751 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:19.751 01:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.751 01:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.010 nvme0n1 00:23:20.010 01:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.010 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:20.010 01:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.010 01:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.010 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:20.010 01:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.010 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.010 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:20.010 01:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.010 01:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.010 01:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.010 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:20.010 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:20.010 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:23:20.010 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:20.010 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:20.010 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:20.010 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:20.010 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdmOTQxNTEwNDFmMzY1YjQ2ZGUyZmUzNTBhYjU1N2a/fNit: 00:23:20.010 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjJiZmY1YzIxMmMwZGE5YmI2MGUyN2NiZTE0OWQ3MDY2YjgwYTgwM2Y0NjdhZmU4MTJhZjg5MDdjYTM3Yjc5ZhS/xuQ=: 00:23:20.010 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:20.010 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:20.010 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdmOTQxNTEwNDFmMzY1YjQ2ZGUyZmUzNTBhYjU1N2a/fNit: 00:23:20.010 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjJiZmY1YzIxMmMwZGE5YmI2MGUyN2NiZTE0OWQ3MDY2YjgwYTgwM2Y0NjdhZmU4MTJhZjg5MDdjYTM3Yjc5ZhS/xuQ=: ]] 00:23:20.010 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjJiZmY1YzIxMmMwZGE5YmI2MGUyN2NiZTE0OWQ3MDY2YjgwYTgwM2Y0NjdhZmU4MTJhZjg5MDdjYTM3Yjc5ZhS/xuQ=: 00:23:20.010 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:23:20.010 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:20.010 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:20.010 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:20.010 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:20.010 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:23:20.010 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:20.010 01:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.010 01:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.010 01:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.010 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:20.010 01:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:20.010 01:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:20.010 01:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:20.010 01:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:20.010 01:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:20.010 01:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:20.010 01:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:20.010 01:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:20.010 01:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:20.010 01:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:20.010 01:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:20.010 01:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.010 01:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.268 nvme0n1 00:23:20.268 01:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.268 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:20.268 01:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.268 01:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.268 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:20.268 01:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.268 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.268 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:20.268 01:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.268 01:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.268 01:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.268 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:20.268 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:23:20.268 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:20.268 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:20.268 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:20.268 01:16:36 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:23:20.268 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzY1NjU4MmYzYzBmN2E3MGM3ZjE4OGNjYmEyMjAxMjVhZDRiY2Q1MmNlNDFmYjllG7L9XA==: 00:23:20.268 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzk4MjA4ZDMzZDhjZTQ4NDBjM2FkYTBlZGU2M2MxZGE2ODY2YmU3ZDZmYTNiMmI4bFoAIQ==: 00:23:20.268 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:20.268 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:20.268 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzY1NjU4MmYzYzBmN2E3MGM3ZjE4OGNjYmEyMjAxMjVhZDRiY2Q1MmNlNDFmYjllG7L9XA==: 00:23:20.268 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzk4MjA4ZDMzZDhjZTQ4NDBjM2FkYTBlZGU2M2MxZGE2ODY2YmU3ZDZmYTNiMmI4bFoAIQ==: ]] 00:23:20.268 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzk4MjA4ZDMzZDhjZTQ4NDBjM2FkYTBlZGU2M2MxZGE2ODY2YmU3ZDZmYTNiMmI4bFoAIQ==: 00:23:20.268 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:23:20.268 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:20.268 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:20.268 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:20.268 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:20.268 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:20.268 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:20.268 01:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.268 01:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.268 01:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.268 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:20.268 01:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:20.268 01:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:20.268 01:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:20.268 01:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:20.268 01:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:20.268 01:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:20.268 01:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:20.268 01:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:20.268 01:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:20.268 01:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:20.268 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:20.269 01:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.269 01:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.527 nvme0n1 00:23:20.527 
01:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.527 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:20.527 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:20.527 01:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.527 01:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.527 01:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.527 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.527 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:20.527 01:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.527 01:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.527 01:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.527 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:20.527 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:23:20.527 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:20.527 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:20.527 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:20.527 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:20.527 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTJmYmEyZjE3YzNiOTY3NmQ2NGMzZTE1MWIxYWJiZjSywDe0: 00:23:20.527 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDcwNjI5Y2NiZDY0OTcwODQyYzZiMGQ0NjAyMTg0NGHT7qr6: 00:23:20.527 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:20.527 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:20.527 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTJmYmEyZjE3YzNiOTY3NmQ2NGMzZTE1MWIxYWJiZjSywDe0: 00:23:20.527 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDcwNjI5Y2NiZDY0OTcwODQyYzZiMGQ0NjAyMTg0NGHT7qr6: ]] 00:23:20.527 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDcwNjI5Y2NiZDY0OTcwODQyYzZiMGQ0NjAyMTg0NGHT7qr6: 00:23:20.527 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:23:20.527 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:20.527 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:20.527 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:20.527 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:20.527 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:20.527 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:20.527 01:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.527 01:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.527 01:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.527 01:16:36 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:23:20.527 01:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:20.527 01:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:20.527 01:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:20.527 01:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:20.527 01:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:20.527 01:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:20.528 01:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:20.528 01:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:20.528 01:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:20.528 01:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:20.528 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:20.528 01:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.528 01:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.787 nvme0n1 00:23:20.787 01:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.787 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:20.787 01:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.787 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:20.787 01:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.787 01:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.787 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.787 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:20.787 01:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.787 01:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.787 01:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.787 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:20.787 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:23:20.787 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:20.787 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:20.787 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:20.787 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:20.787 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGI0OTExOTYzNTk0YjUyN2E3MDJmZTQ1MWE1N2NjZTA0Yjg4NTNiYjhkNWRhMGJmJJWzog==: 00:23:20.787 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTgxYWQwMGMwOWEzZDM2ODlkMTQ5MTZjYTQxZDNjNmaYWk7V: 00:23:20.787 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:20.787 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:23:20.787 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGI0OTExOTYzNTk0YjUyN2E3MDJmZTQ1MWE1N2NjZTA0Yjg4NTNiYjhkNWRhMGJmJJWzog==: 00:23:20.787 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTgxYWQwMGMwOWEzZDM2ODlkMTQ5MTZjYTQxZDNjNmaYWk7V: ]] 00:23:20.787 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTgxYWQwMGMwOWEzZDM2ODlkMTQ5MTZjYTQxZDNjNmaYWk7V: 00:23:20.787 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:23:20.787 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:20.787 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:20.787 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:20.787 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:20.787 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:20.787 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:20.787 01:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.787 01:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.787 01:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.787 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:20.787 01:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:20.787 01:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:20.787 01:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:20.787 01:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:20.787 01:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:20.787 01:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:20.787 01:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:20.787 01:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:20.787 01:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:20.787 01:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:20.787 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:20.787 01:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.787 01:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.046 nvme0n1 00:23:21.046 01:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.046 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:21.046 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:21.046 01:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.046 01:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.046 01:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.046 
01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:21.046 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:21.046 01:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.046 01:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.046 01:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.046 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:21.046 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:23:21.046 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:21.046 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:21.046 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:21.046 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:21.046 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTBlZDgyZjBmODc1MzQ1ZGJjZDgzMmFmYTY1NzcxNDI3MDAzZjEwNzhjNjg5ZWU4MGQ1NTdhNjUxODJhMDI0Np/o7pE=: 00:23:21.046 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:21.046 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:21.046 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:21.046 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTBlZDgyZjBmODc1MzQ1ZGJjZDgzMmFmYTY1NzcxNDI3MDAzZjEwNzhjNjg5ZWU4MGQ1NTdhNjUxODJhMDI0Np/o7pE=: 00:23:21.046 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:21.047 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:23:21.047 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:21.047 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:21.047 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:21.047 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:21.047 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:21.047 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:21.047 01:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.047 01:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.047 01:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.047 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:21.047 01:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:21.047 01:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:21.047 01:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:21.047 01:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:21.047 01:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:21.047 01:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:21.047 01:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:21.047 01:16:36 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:21.047 01:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:21.047 01:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:21.047 01:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:21.047 01:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.047 01:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.305 nvme0n1 00:23:21.305 01:16:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.305 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:21.305 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:21.305 01:16:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.305 01:16:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.305 01:16:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.305 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:21.305 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:21.305 01:16:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.305 01:16:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.305 01:16:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.305 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:21.305 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:21.305 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:23:21.305 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:21.305 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:21.305 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:21.305 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:21.305 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdmOTQxNTEwNDFmMzY1YjQ2ZGUyZmUzNTBhYjU1N2a/fNit: 00:23:21.305 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjJiZmY1YzIxMmMwZGE5YmI2MGUyN2NiZTE0OWQ3MDY2YjgwYTgwM2Y0NjdhZmU4MTJhZjg5MDdjYTM3Yjc5ZhS/xuQ=: 00:23:21.305 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:21.305 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:21.305 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdmOTQxNTEwNDFmMzY1YjQ2ZGUyZmUzNTBhYjU1N2a/fNit: 00:23:21.305 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjJiZmY1YzIxMmMwZGE5YmI2MGUyN2NiZTE0OWQ3MDY2YjgwYTgwM2Y0NjdhZmU4MTJhZjg5MDdjYTM3Yjc5ZhS/xuQ=: ]] 00:23:21.305 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjJiZmY1YzIxMmMwZGE5YmI2MGUyN2NiZTE0OWQ3MDY2YjgwYTgwM2Y0NjdhZmU4MTJhZjg5MDdjYTM3Yjc5ZhS/xuQ=: 00:23:21.305 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:23:21.305 01:16:37 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:21.305 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:21.305 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:21.305 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:21.305 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:21.305 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:21.305 01:16:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.305 01:16:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.306 01:16:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.306 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:21.306 01:16:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:21.306 01:16:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:21.306 01:16:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:21.306 01:16:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:21.306 01:16:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:21.306 01:16:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:21.306 01:16:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:21.306 01:16:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:21.306 01:16:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:21.306 01:16:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:21.306 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:21.306 01:16:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.306 01:16:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.564 nvme0n1 00:23:21.564 01:16:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.564 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:21.564 01:16:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.564 01:16:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.564 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:21.564 01:16:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.564 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:21.564 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:21.564 01:16:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.564 01:16:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.564 01:16:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.564 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:23:21.564 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:23:21.564 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:21.564 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:21.564 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:21.564 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:21.564 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzY1NjU4MmYzYzBmN2E3MGM3ZjE4OGNjYmEyMjAxMjVhZDRiY2Q1MmNlNDFmYjllG7L9XA==: 00:23:21.564 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzk4MjA4ZDMzZDhjZTQ4NDBjM2FkYTBlZGU2M2MxZGE2ODY2YmU3ZDZmYTNiMmI4bFoAIQ==: 00:23:21.564 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:21.564 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:21.564 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzY1NjU4MmYzYzBmN2E3MGM3ZjE4OGNjYmEyMjAxMjVhZDRiY2Q1MmNlNDFmYjllG7L9XA==: 00:23:21.564 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzk4MjA4ZDMzZDhjZTQ4NDBjM2FkYTBlZGU2M2MxZGE2ODY2YmU3ZDZmYTNiMmI4bFoAIQ==: ]] 00:23:21.564 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzk4MjA4ZDMzZDhjZTQ4NDBjM2FkYTBlZGU2M2MxZGE2ODY2YmU3ZDZmYTNiMmI4bFoAIQ==: 00:23:21.564 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:23:21.564 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:21.564 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:21.564 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:21.564 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:21.564 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:21.564 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:21.564 01:16:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.564 01:16:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.564 01:16:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.564 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:21.564 01:16:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:21.564 01:16:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:21.564 01:16:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:21.564 01:16:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:21.564 01:16:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:21.564 01:16:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:21.564 01:16:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:21.564 01:16:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:21.564 01:16:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:21.564 01:16:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:21.564 01:16:37 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:21.564 01:16:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.564 01:16:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.822 nvme0n1 00:23:21.822 01:16:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.822 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:21.822 01:16:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.822 01:16:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.822 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:21.822 01:16:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.822 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:21.822 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:21.822 01:16:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.822 01:16:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.822 01:16:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.822 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:21.822 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:23:21.822 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:21.822 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:21.822 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:21.822 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:21.822 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTJmYmEyZjE3YzNiOTY3NmQ2NGMzZTE1MWIxYWJiZjSywDe0: 00:23:21.822 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDcwNjI5Y2NiZDY0OTcwODQyYzZiMGQ0NjAyMTg0NGHT7qr6: 00:23:21.822 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:21.822 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:21.822 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTJmYmEyZjE3YzNiOTY3NmQ2NGMzZTE1MWIxYWJiZjSywDe0: 00:23:21.822 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDcwNjI5Y2NiZDY0OTcwODQyYzZiMGQ0NjAyMTg0NGHT7qr6: ]] 00:23:21.822 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDcwNjI5Y2NiZDY0OTcwODQyYzZiMGQ0NjAyMTg0NGHT7qr6: 00:23:21.822 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:23:21.822 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:21.822 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:21.822 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:21.823 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:21.823 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:21.823 01:16:37 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:21.823 01:16:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.823 01:16:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.823 01:16:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.823 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:21.823 01:16:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:21.823 01:16:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:21.823 01:16:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:21.823 01:16:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:21.823 01:16:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:21.823 01:16:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:21.823 01:16:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:21.823 01:16:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:21.823 01:16:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:21.823 01:16:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:21.823 01:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:21.823 01:16:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.823 01:16:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.081 nvme0n1 00:23:22.081 01:16:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.081 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:22.081 01:16:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.081 01:16:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.081 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:22.081 01:16:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.339 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.339 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:22.339 01:16:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.339 01:16:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.339 01:16:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.339 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:22.339 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:23:22.339 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:22.339 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:22.339 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:22.339 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
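
The run above exercises one (digest, dhgroup, keyid) combination end to end: nvmet_auth_set_key programs the kernel nvmet target with the key under test, and connect_authenticate then performs the authenticated fabric connect from the SPDK host side. Below is a minimal sketch of the target-side helper, reconstructed from the echoes traced at auth.sh@48-51; xtrace does not display redirections, so the configfs attribute paths and the hostnqn directory are assumptions for illustration, while the keys/ckeys arrays come from the surrounding script:

    # Sketch only: program DH-HMAC-CHAP parameters for one host on the Linux
    # nvmet target via configfs (paths assumed, not shown in this log).
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local hostdir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

        echo "hmac(${digest})" > "${hostdir}/dhchap_hash"     # e.g. hmac(sha256)
        echo "${dhgroup}"      > "${hostdir}/dhchap_dhgroup"  # e.g. ffdhe4096
        echo "${keys[keyid]}"  > "${hostdir}/dhchap_key"      # DHHC-1:... host key
        # A controller (bidirectional) key is written only when one is defined
        # for this keyid, matching the [[ -z ... ]] guard traced at auth.sh@51:
        if [[ -n ${ckeys[keyid]} ]]; then
            echo "${ckeys[keyid]}" > "${hostdir}/dhchap_ctrl_key"
        fi
    }

The DHHC-1 strings follow the NVMe in-band authentication secret representation: a two-digit transform field (00 = unhashed secret, 01/02/03 = SHA-256/384/512-transformed) followed by the base64-encoded secret with a trailing CRC-32, which is why the key lengths in the trace grow with the transform number.
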
00:23:22.339 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGI0OTExOTYzNTk0YjUyN2E3MDJmZTQ1MWE1N2NjZTA0Yjg4NTNiYjhkNWRhMGJmJJWzog==: 00:23:22.339 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTgxYWQwMGMwOWEzZDM2ODlkMTQ5MTZjYTQxZDNjNmaYWk7V: 00:23:22.339 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:22.339 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:22.339 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGI0OTExOTYzNTk0YjUyN2E3MDJmZTQ1MWE1N2NjZTA0Yjg4NTNiYjhkNWRhMGJmJJWzog==: 00:23:22.339 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTgxYWQwMGMwOWEzZDM2ODlkMTQ5MTZjYTQxZDNjNmaYWk7V: ]] 00:23:22.339 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTgxYWQwMGMwOWEzZDM2ODlkMTQ5MTZjYTQxZDNjNmaYWk7V: 00:23:22.339 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:23:22.339 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:22.339 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:22.339 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:22.339 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:22.339 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:22.340 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:22.340 01:16:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.340 01:16:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.340 01:16:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.340 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:22.340 01:16:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:22.340 01:16:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:22.340 01:16:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:22.340 01:16:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:22.340 01:16:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:22.340 01:16:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:22.340 01:16:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:22.340 01:16:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:22.340 01:16:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:22.340 01:16:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:22.340 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:22.340 01:16:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.340 01:16:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.597 nvme0n1 00:23:22.598 01:16:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.598 01:16:38 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:22.598 01:16:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.598 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:22.598 01:16:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.598 01:16:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.598 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.598 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:22.598 01:16:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.598 01:16:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.598 01:16:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.598 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:22.598 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:23:22.598 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:22.598 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:22.598 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:22.598 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:22.598 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTBlZDgyZjBmODc1MzQ1ZGJjZDgzMmFmYTY1NzcxNDI3MDAzZjEwNzhjNjg5ZWU4MGQ1NTdhNjUxODJhMDI0Np/o7pE=: 00:23:22.598 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:22.598 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:22.598 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:22.598 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTBlZDgyZjBmODc1MzQ1ZGJjZDgzMmFmYTY1NzcxNDI3MDAzZjEwNzhjNjg5ZWU4MGQ1NTdhNjUxODJhMDI0Np/o7pE=: 00:23:22.598 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:22.598 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:23:22.598 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:22.598 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:22.598 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:22.598 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:22.598 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:22.598 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:22.598 01:16:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.598 01:16:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.598 01:16:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.598 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:22.598 01:16:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:22.598 01:16:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:22.598 01:16:38 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:23:22.598 01:16:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:22.598 01:16:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:22.598 01:16:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:22.598 01:16:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:22.598 01:16:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:22.598 01:16:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:22.598 01:16:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:22.598 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:22.598 01:16:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.598 01:16:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.857 nvme0n1 00:23:22.857 01:16:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.857 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:22.857 01:16:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.857 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:22.857 01:16:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.857 01:16:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.857 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.857 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:22.857 01:16:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.857 01:16:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.857 01:16:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.857 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:22.857 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:22.857 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:23:22.857 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:22.857 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:22.857 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:22.857 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:22.857 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdmOTQxNTEwNDFmMzY1YjQ2ZGUyZmUzNTBhYjU1N2a/fNit: 00:23:22.857 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjJiZmY1YzIxMmMwZGE5YmI2MGUyN2NiZTE0OWQ3MDY2YjgwYTgwM2Y0NjdhZmU4MTJhZjg5MDdjYTM3Yjc5ZhS/xuQ=: 00:23:22.857 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:22.857 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:22.857 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdmOTQxNTEwNDFmMzY1YjQ2ZGUyZmUzNTBhYjU1N2a/fNit: 00:23:22.857 01:16:38 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjJiZmY1YzIxMmMwZGE5YmI2MGUyN2NiZTE0OWQ3MDY2YjgwYTgwM2Y0NjdhZmU4MTJhZjg5MDdjYTM3Yjc5ZhS/xuQ=: ]] 00:23:22.857 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjJiZmY1YzIxMmMwZGE5YmI2MGUyN2NiZTE0OWQ3MDY2YjgwYTgwM2Y0NjdhZmU4MTJhZjg5MDdjYTM3Yjc5ZhS/xuQ=: 00:23:22.857 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:23:22.857 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:22.857 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:22.857 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:22.857 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:22.857 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:22.857 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:22.857 01:16:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.857 01:16:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.857 01:16:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.857 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:22.857 01:16:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:22.857 01:16:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:22.857 01:16:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:22.857 01:16:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:22.857 01:16:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:22.857 01:16:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:22.857 01:16:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:22.857 01:16:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:22.857 01:16:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:22.857 01:16:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:22.857 01:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:22.857 01:16:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.857 01:16:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.422 nvme0n1 00:23:23.422 01:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.422 01:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:23.422 01:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.422 01:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.422 01:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:23.422 01:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.422 01:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:23.422 
01:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:23.422 01:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.422 01:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.422 01:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.422 01:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:23.422 01:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:23:23.422 01:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:23.422 01:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:23.422 01:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:23.422 01:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:23.422 01:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzY1NjU4MmYzYzBmN2E3MGM3ZjE4OGNjYmEyMjAxMjVhZDRiY2Q1MmNlNDFmYjllG7L9XA==: 00:23:23.422 01:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzk4MjA4ZDMzZDhjZTQ4NDBjM2FkYTBlZGU2M2MxZGE2ODY2YmU3ZDZmYTNiMmI4bFoAIQ==: 00:23:23.422 01:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:23.422 01:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:23.422 01:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzY1NjU4MmYzYzBmN2E3MGM3ZjE4OGNjYmEyMjAxMjVhZDRiY2Q1MmNlNDFmYjllG7L9XA==: 00:23:23.422 01:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzk4MjA4ZDMzZDhjZTQ4NDBjM2FkYTBlZGU2M2MxZGE2ODY2YmU3ZDZmYTNiMmI4bFoAIQ==: ]] 00:23:23.422 01:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzk4MjA4ZDMzZDhjZTQ4NDBjM2FkYTBlZGU2M2MxZGE2ODY2YmU3ZDZmYTNiMmI4bFoAIQ==: 00:23:23.422 01:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:23:23.422 01:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:23.422 01:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:23.422 01:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:23.422 01:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:23.423 01:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:23.423 01:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:23.423 01:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.423 01:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.423 01:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.423 01:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:23.423 01:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:23.423 01:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:23.423 01:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:23.423 01:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:23.423 01:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:23.423 01:16:39 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:23.423 01:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:23.423 01:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:23.423 01:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:23.423 01:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:23.423 01:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:23.423 01:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.423 01:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.989 nvme0n1 00:23:23.989 01:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.989 01:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:23.989 01:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.989 01:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.989 01:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:23.989 01:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.989 01:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:23.989 01:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:23.989 01:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.989 01:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.989 01:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.989 01:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:23.989 01:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:23:23.989 01:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:23.989 01:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:23.989 01:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:23.989 01:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:23.989 01:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTJmYmEyZjE3YzNiOTY3NmQ2NGMzZTE1MWIxYWJiZjSywDe0: 00:23:23.989 01:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDcwNjI5Y2NiZDY0OTcwODQyYzZiMGQ0NjAyMTg0NGHT7qr6: 00:23:23.989 01:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:23.989 01:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:23.989 01:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTJmYmEyZjE3YzNiOTY3NmQ2NGMzZTE1MWIxYWJiZjSywDe0: 00:23:23.989 01:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDcwNjI5Y2NiZDY0OTcwODQyYzZiMGQ0NjAyMTg0NGHT7qr6: ]] 00:23:23.989 01:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDcwNjI5Y2NiZDY0OTcwODQyYzZiMGQ0NjAyMTg0NGHT7qr6: 00:23:23.989 01:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:23:23.989 01:16:39 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:23.989 01:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:23.989 01:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:23.989 01:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:23.989 01:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:23.989 01:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:23.989 01:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.989 01:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.989 01:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.989 01:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:23.989 01:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:23.989 01:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:23.989 01:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:23.989 01:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:23.989 01:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:23.989 01:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:23.989 01:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:23.989 01:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:23.989 01:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:23.989 01:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:23.989 01:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:23.989 01:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.989 01:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.556 nvme0n1 00:23:24.556 01:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.556 01:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:24.556 01:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:24.556 01:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.556 01:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.556 01:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.556 01:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:24.556 01:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:24.556 01:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.556 01:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.556 01:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.556 01:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:24.556 
01:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:23:24.556 01:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:24.556 01:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:24.556 01:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:24.556 01:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:24.556 01:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGI0OTExOTYzNTk0YjUyN2E3MDJmZTQ1MWE1N2NjZTA0Yjg4NTNiYjhkNWRhMGJmJJWzog==: 00:23:24.556 01:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTgxYWQwMGMwOWEzZDM2ODlkMTQ5MTZjYTQxZDNjNmaYWk7V: 00:23:24.556 01:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:24.556 01:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:24.556 01:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGI0OTExOTYzNTk0YjUyN2E3MDJmZTQ1MWE1N2NjZTA0Yjg4NTNiYjhkNWRhMGJmJJWzog==: 00:23:24.556 01:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTgxYWQwMGMwOWEzZDM2ODlkMTQ5MTZjYTQxZDNjNmaYWk7V: ]] 00:23:24.556 01:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTgxYWQwMGMwOWEzZDM2ODlkMTQ5MTZjYTQxZDNjNmaYWk7V: 00:23:24.556 01:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:23:24.556 01:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:24.556 01:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:24.556 01:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:24.556 01:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:24.556 01:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:24.556 01:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:24.556 01:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.556 01:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.556 01:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.556 01:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:24.556 01:16:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:24.556 01:16:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:24.556 01:16:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:24.556 01:16:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:24.556 01:16:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:24.556 01:16:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:24.556 01:16:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:24.556 01:16:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:24.556 01:16:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:24.556 01:16:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:24.556 01:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:24.556 01:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.556 01:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.122 nvme0n1 00:23:25.122 01:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.122 01:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:25.122 01:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.122 01:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.122 01:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:25.122 01:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.122 01:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.122 01:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:25.122 01:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.122 01:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.122 01:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.122 01:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:25.122 01:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:23:25.122 01:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:25.122 01:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:25.122 01:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:25.122 01:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:25.122 01:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTBlZDgyZjBmODc1MzQ1ZGJjZDgzMmFmYTY1NzcxNDI3MDAzZjEwNzhjNjg5ZWU4MGQ1NTdhNjUxODJhMDI0Np/o7pE=: 00:23:25.122 01:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:25.122 01:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:25.122 01:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:25.122 01:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTBlZDgyZjBmODc1MzQ1ZGJjZDgzMmFmYTY1NzcxNDI3MDAzZjEwNzhjNjg5ZWU4MGQ1NTdhNjUxODJhMDI0Np/o7pE=: 00:23:25.122 01:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:25.122 01:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:23:25.122 01:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:25.122 01:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:25.122 01:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:25.122 01:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:25.122 01:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:25.122 01:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:25.122 01:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.122 01:16:40 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:25.122 01:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.122 01:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:25.122 01:16:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:25.122 01:16:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:25.122 01:16:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:25.122 01:16:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:25.122 01:16:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:25.122 01:16:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:25.122 01:16:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:25.122 01:16:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:25.122 01:16:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:25.122 01:16:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:25.123 01:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:25.123 01:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.123 01:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.688 nvme0n1 00:23:25.688 01:16:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.688 01:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:25.688 01:16:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.688 01:16:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.688 01:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:25.688 01:16:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.688 01:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.688 01:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:25.688 01:16:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.688 01:16:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.688 01:16:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.688 01:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:25.688 01:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:25.688 01:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:23:25.688 01:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:25.688 01:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:25.688 01:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:25.688 01:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:25.688 01:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdmOTQxNTEwNDFmMzY1YjQ2ZGUyZmUzNTBhYjU1N2a/fNit: 00:23:25.688 01:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YjJiZmY1YzIxMmMwZGE5YmI2MGUyN2NiZTE0OWQ3MDY2YjgwYTgwM2Y0NjdhZmU4MTJhZjg5MDdjYTM3Yjc5ZhS/xuQ=: 00:23:25.688 01:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:25.688 01:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:25.688 01:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdmOTQxNTEwNDFmMzY1YjQ2ZGUyZmUzNTBhYjU1N2a/fNit: 00:23:25.688 01:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjJiZmY1YzIxMmMwZGE5YmI2MGUyN2NiZTE0OWQ3MDY2YjgwYTgwM2Y0NjdhZmU4MTJhZjg5MDdjYTM3Yjc5ZhS/xuQ=: ]] 00:23:25.688 01:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjJiZmY1YzIxMmMwZGE5YmI2MGUyN2NiZTE0OWQ3MDY2YjgwYTgwM2Y0NjdhZmU4MTJhZjg5MDdjYTM3Yjc5ZhS/xuQ=: 00:23:25.688 01:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:23:25.688 01:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:25.688 01:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:25.688 01:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:25.688 01:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:25.688 01:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:25.688 01:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:25.688 01:16:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.688 01:16:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.688 01:16:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.688 01:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:25.688 01:16:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:25.688 01:16:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:25.688 01:16:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:25.689 01:16:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:25.689 01:16:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:25.689 01:16:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:25.689 01:16:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:25.689 01:16:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:25.689 01:16:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:25.689 01:16:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:25.689 01:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:25.689 01:16:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.689 01:16:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.623 nvme0n1 00:23:26.623 01:16:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.623 01:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:26.623 01:16:42 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:26.623 01:16:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.623 01:16:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.623 01:16:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.623 01:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.623 01:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:26.623 01:16:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.623 01:16:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.623 01:16:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.623 01:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:26.623 01:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:23:26.623 01:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:26.623 01:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:26.623 01:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:26.623 01:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:26.623 01:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzY1NjU4MmYzYzBmN2E3MGM3ZjE4OGNjYmEyMjAxMjVhZDRiY2Q1MmNlNDFmYjllG7L9XA==: 00:23:26.623 01:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzk4MjA4ZDMzZDhjZTQ4NDBjM2FkYTBlZGU2M2MxZGE2ODY2YmU3ZDZmYTNiMmI4bFoAIQ==: 00:23:26.623 01:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:26.623 01:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:26.623 01:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzY1NjU4MmYzYzBmN2E3MGM3ZjE4OGNjYmEyMjAxMjVhZDRiY2Q1MmNlNDFmYjllG7L9XA==: 00:23:26.623 01:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzk4MjA4ZDMzZDhjZTQ4NDBjM2FkYTBlZGU2M2MxZGE2ODY2YmU3ZDZmYTNiMmI4bFoAIQ==: ]] 00:23:26.623 01:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzk4MjA4ZDMzZDhjZTQ4NDBjM2FkYTBlZGU2M2MxZGE2ODY2YmU3ZDZmYTNiMmI4bFoAIQ==: 00:23:26.623 01:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:23:26.623 01:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:26.623 01:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:26.623 01:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:26.623 01:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:26.623 01:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:26.623 01:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:26.623 01:16:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.623 01:16:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.623 01:16:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.623 01:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:26.623 01:16:42 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:23:26.623 01:16:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:26.623 01:16:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:26.623 01:16:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:26.623 01:16:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:26.623 01:16:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:26.623 01:16:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:26.623 01:16:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:26.623 01:16:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:26.623 01:16:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:26.624 01:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:26.624 01:16:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.624 01:16:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.557 nvme0n1 00:23:27.557 01:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.557 01:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:27.557 01:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.557 01:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.557 01:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:27.557 01:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.557 01:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.557 01:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:27.557 01:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.557 01:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.557 01:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.557 01:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:27.557 01:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:23:27.557 01:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:27.557 01:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:27.557 01:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:27.557 01:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:27.557 01:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTJmYmEyZjE3YzNiOTY3NmQ2NGMzZTE1MWIxYWJiZjSywDe0: 00:23:27.557 01:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDcwNjI5Y2NiZDY0OTcwODQyYzZiMGQ0NjAyMTg0NGHT7qr6: 00:23:27.557 01:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:27.557 01:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:27.557 01:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:YTJmYmEyZjE3YzNiOTY3NmQ2NGMzZTE1MWIxYWJiZjSywDe0: 00:23:27.557 01:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDcwNjI5Y2NiZDY0OTcwODQyYzZiMGQ0NjAyMTg0NGHT7qr6: ]] 00:23:27.557 01:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDcwNjI5Y2NiZDY0OTcwODQyYzZiMGQ0NjAyMTg0NGHT7qr6: 00:23:27.557 01:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:23:27.557 01:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:27.557 01:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:27.557 01:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:27.557 01:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:27.557 01:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:27.557 01:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:27.557 01:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.557 01:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.557 01:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.557 01:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:27.557 01:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:27.557 01:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:27.557 01:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:27.557 01:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:27.557 01:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:27.557 01:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:27.557 01:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:27.557 01:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:27.557 01:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:27.557 01:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:27.557 01:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:27.557 01:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.557 01:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.489 nvme0n1 00:23:28.489 01:16:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.489 01:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.489 01:16:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.490 01:16:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.490 01:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:28.490 01:16:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.490 01:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.490 
01:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.490 01:16:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.490 01:16:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.490 01:16:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.490 01:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:28.490 01:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:23:28.490 01:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.490 01:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:28.490 01:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:28.490 01:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:28.490 01:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGI0OTExOTYzNTk0YjUyN2E3MDJmZTQ1MWE1N2NjZTA0Yjg4NTNiYjhkNWRhMGJmJJWzog==: 00:23:28.490 01:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTgxYWQwMGMwOWEzZDM2ODlkMTQ5MTZjYTQxZDNjNmaYWk7V: 00:23:28.490 01:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:28.490 01:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:28.490 01:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGI0OTExOTYzNTk0YjUyN2E3MDJmZTQ1MWE1N2NjZTA0Yjg4NTNiYjhkNWRhMGJmJJWzog==: 00:23:28.490 01:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTgxYWQwMGMwOWEzZDM2ODlkMTQ5MTZjYTQxZDNjNmaYWk7V: ]] 00:23:28.490 01:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTgxYWQwMGMwOWEzZDM2ODlkMTQ5MTZjYTQxZDNjNmaYWk7V: 00:23:28.490 01:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:23:28.490 01:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.490 01:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:28.490 01:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:28.490 01:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:28.490 01:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.490 01:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:28.490 01:16:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.490 01:16:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.490 01:16:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.490 01:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.490 01:16:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:28.490 01:16:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:28.490 01:16:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:28.490 01:16:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.490 01:16:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.490 01:16:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
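
Every target-side key set in this section is matched by a host-side connect_authenticate pass. Condensed from the rpc_cmd calls traced throughout this run (get_main_ns_ip resolves NVMF_INITIATOR_IP, 10.0.0.1 here, for tcp transports), the following is a sketch of that flow, assembled from the trace rather than quoted from the auth.sh source:

    # Sketch only: authenticate one digest/dhgroup/keyid combination from the
    # SPDK initiator and verify the controller actually comes up.
    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

        # Restrict the initiator to the digest/dhgroup pair under test.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

        # Attach over TCP with the named key objects; the connect only succeeds
        # if DH-HMAC-CHAP completes with the parameters programmed on the target.
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"

        # Verify the controller exists, then tear down so the next
        # digest/dhgroup/keyid combination starts from a clean state.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

Each combination is attached, verified via bdev_nvme_get_controllers, and detached independently rather than reusing an authenticated session, which is why the nvme0 controller (and its nvme0n1 namespace) reappears for every iteration in the log.
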
00:23:28.490 01:16:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:28.490 01:16:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:28.490 01:16:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:28.490 01:16:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:28.490 01:16:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:28.490 01:16:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.490 01:16:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.420 nvme0n1 00:23:29.420 01:16:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.420 01:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.420 01:16:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.420 01:16:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.420 01:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.420 01:16:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.420 01:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.420 01:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.420 01:16:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.420 01:16:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.420 01:16:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.420 01:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:29.420 01:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:23:29.420 01:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.420 01:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:29.420 01:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:29.420 01:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:29.420 01:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTBlZDgyZjBmODc1MzQ1ZGJjZDgzMmFmYTY1NzcxNDI3MDAzZjEwNzhjNjg5ZWU4MGQ1NTdhNjUxODJhMDI0Np/o7pE=: 00:23:29.420 01:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:29.420 01:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:29.420 01:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:29.420 01:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTBlZDgyZjBmODc1MzQ1ZGJjZDgzMmFmYTY1NzcxNDI3MDAzZjEwNzhjNjg5ZWU4MGQ1NTdhNjUxODJhMDI0Np/o7pE=: 00:23:29.420 01:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:29.420 01:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:23:29.420 01:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.420 01:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:29.420 01:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:29.420 
01:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:29.420 01:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.420 01:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:29.420 01:16:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.420 01:16:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.420 01:16:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.420 01:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.420 01:16:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:29.420 01:16:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:29.420 01:16:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:29.420 01:16:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.420 01:16:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.420 01:16:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:29.420 01:16:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:29.420 01:16:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:29.420 01:16:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:29.420 01:16:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:29.420 01:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:29.420 01:16:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.420 01:16:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.351 nvme0n1 00:23:30.351 01:16:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.351 01:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:30.351 01:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:30.351 01:16:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.351 01:16:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.351 01:16:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.351 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.351 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:30.351 01:16:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.351 01:16:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.351 01:16:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.351 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:30.351 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:30.351 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:30.351 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:23:30.351 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:30.351 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:30.351 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:30.351 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdmOTQxNTEwNDFmMzY1YjQ2ZGUyZmUzNTBhYjU1N2a/fNit: 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjJiZmY1YzIxMmMwZGE5YmI2MGUyN2NiZTE0OWQ3MDY2YjgwYTgwM2Y0NjdhZmU4MTJhZjg5MDdjYTM3Yjc5ZhS/xuQ=: 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdmOTQxNTEwNDFmMzY1YjQ2ZGUyZmUzNTBhYjU1N2a/fNit: 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjJiZmY1YzIxMmMwZGE5YmI2MGUyN2NiZTE0OWQ3MDY2YjgwYTgwM2Y0NjdhZmU4MTJhZjg5MDdjYTM3Yjc5ZhS/xuQ=: ]] 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjJiZmY1YzIxMmMwZGE5YmI2MGUyN2NiZTE0OWQ3MDY2YjgwYTgwM2Y0NjdhZmU4MTJhZjg5MDdjYTM3Yjc5ZhS/xuQ=: 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.352 nvme0n1 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzY1NjU4MmYzYzBmN2E3MGM3ZjE4OGNjYmEyMjAxMjVhZDRiY2Q1MmNlNDFmYjllG7L9XA==: 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzk4MjA4ZDMzZDhjZTQ4NDBjM2FkYTBlZGU2M2MxZGE2ODY2YmU3ZDZmYTNiMmI4bFoAIQ==: 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzY1NjU4MmYzYzBmN2E3MGM3ZjE4OGNjYmEyMjAxMjVhZDRiY2Q1MmNlNDFmYjllG7L9XA==: 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzk4MjA4ZDMzZDhjZTQ4NDBjM2FkYTBlZGU2M2MxZGE2ODY2YmU3ZDZmYTNiMmI4bFoAIQ==: ]] 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzk4MjA4ZDMzZDhjZTQ4NDBjM2FkYTBlZGU2M2MxZGE2ODY2YmU3ZDZmYTNiMmI4bFoAIQ==: 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
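connect_authenticate (host/auth.sh@55-65) is the initiator-side half of each iteration: it restricts SPDK's bdev_nvme module to the one digest/dhgroup pair under test, attaches with the matching --dhchap-key (plus --dhchap-ctrlr-key when a controller key exists, for bidirectional authentication), and treats the appearance of controller nvme0 as proof the DH-HMAC-CHAP exchange succeeded. A sketch as implied by the trace, assuming rpc_cmd wraps SPDK's scripts/rpc.py, that key0..key4/ckey0..ckey4 name keys registered earlier in the run, and that the address comes from get_main_ns_ip via command substitution; the bare nvme0n1 lines in the log appear to be the namespace's block device surfacing after a successful attach.

    # Sketch of connect_authenticate from the host/auth.sh@55-65 markers.
    connect_authenticate() {
        local digest dhgroup keyid ckey
        digest="$1" dhgroup="$2" keyid="$3"
        # expands to zero words, or to: --dhchap-ctrlr-key ckeyN (verbatim from @58)
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"

        # the controller only exists if authentication completed
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }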
00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.352 01:16:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.610 nvme0n1 00:23:30.610 01:16:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.610 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:30.610 01:16:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.610 01:16:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.610 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:30.610 01:16:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.610 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.610 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:30.610 01:16:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.610 01:16:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.610 01:16:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.610 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:30.610 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:23:30.610 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:30.610 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:30.610 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:30.610 01:16:46 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:23:30.610 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTJmYmEyZjE3YzNiOTY3NmQ2NGMzZTE1MWIxYWJiZjSywDe0: 00:23:30.610 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDcwNjI5Y2NiZDY0OTcwODQyYzZiMGQ0NjAyMTg0NGHT7qr6: 00:23:30.610 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:30.610 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:30.610 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTJmYmEyZjE3YzNiOTY3NmQ2NGMzZTE1MWIxYWJiZjSywDe0: 00:23:30.610 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDcwNjI5Y2NiZDY0OTcwODQyYzZiMGQ0NjAyMTg0NGHT7qr6: ]] 00:23:30.610 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDcwNjI5Y2NiZDY0OTcwODQyYzZiMGQ0NjAyMTg0NGHT7qr6: 00:23:30.610 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:23:30.610 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:30.610 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:30.610 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:30.610 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:30.610 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:30.610 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:30.610 01:16:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.610 01:16:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.610 01:16:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.610 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:30.610 01:16:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:30.610 01:16:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:30.610 01:16:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:30.610 01:16:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.610 01:16:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.610 01:16:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:30.610 01:16:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:30.610 01:16:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:30.610 01:16:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:30.610 01:16:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:30.610 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:30.610 01:16:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.610 01:16:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.867 nvme0n1 00:23:30.867 01:16:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.867 01:16:46 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:30.867 01:16:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.867 01:16:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.867 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:30.867 01:16:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.867 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.867 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:30.867 01:16:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.867 01:16:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.867 01:16:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.867 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:30.867 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:23:30.867 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:30.867 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:30.867 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:30.867 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:30.867 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGI0OTExOTYzNTk0YjUyN2E3MDJmZTQ1MWE1N2NjZTA0Yjg4NTNiYjhkNWRhMGJmJJWzog==: 00:23:30.867 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTgxYWQwMGMwOWEzZDM2ODlkMTQ5MTZjYTQxZDNjNmaYWk7V: 00:23:30.867 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:30.867 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:30.867 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGI0OTExOTYzNTk0YjUyN2E3MDJmZTQ1MWE1N2NjZTA0Yjg4NTNiYjhkNWRhMGJmJJWzog==: 00:23:30.867 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTgxYWQwMGMwOWEzZDM2ODlkMTQ5MTZjYTQxZDNjNmaYWk7V: ]] 00:23:30.867 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTgxYWQwMGMwOWEzZDM2ODlkMTQ5MTZjYTQxZDNjNmaYWk7V: 00:23:30.867 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:23:30.867 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:30.867 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:30.867 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:30.867 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:30.867 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:30.867 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:30.867 01:16:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.867 01:16:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.867 01:16:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.867 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:30.867 01:16:46 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:23:30.867 01:16:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:30.867 01:16:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:30.867 01:16:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.867 01:16:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.867 01:16:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:30.867 01:16:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:30.867 01:16:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:30.867 01:16:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:30.867 01:16:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:30.867 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:30.867 01:16:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.867 01:16:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.124 nvme0n1 00:23:31.124 01:16:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.124 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.124 01:16:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.124 01:16:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.124 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:31.124 01:16:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.124 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.124 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.124 01:16:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.124 01:16:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.124 01:16:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.124 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:31.124 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:23:31.124 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.124 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:31.124 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:31.124 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:31.124 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTBlZDgyZjBmODc1MzQ1ZGJjZDgzMmFmYTY1NzcxNDI3MDAzZjEwNzhjNjg5ZWU4MGQ1NTdhNjUxODJhMDI0Np/o7pE=: 00:23:31.124 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:31.124 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:31.124 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:31.124 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MTBlZDgyZjBmODc1MzQ1ZGJjZDgzMmFmYTY1NzcxNDI3MDAzZjEwNzhjNjg5ZWU4MGQ1NTdhNjUxODJhMDI0Np/o7pE=: 00:23:31.124 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:31.124 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:23:31.124 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:31.124 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:31.124 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:31.124 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:31.124 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:31.124 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:31.124 01:16:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.124 01:16:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.124 01:16:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.124 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:31.124 01:16:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:31.124 01:16:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:31.124 01:16:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:31.124 01:16:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.124 01:16:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.124 01:16:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:31.124 01:16:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:31.124 01:16:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:31.124 01:16:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:31.124 01:16:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:31.124 01:16:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:31.124 01:16:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.124 01:16:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.124 nvme0n1 00:23:31.380 01:16:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.380 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.380 01:16:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.380 01:16:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.380 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:31.380 01:16:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.380 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.380 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.380 01:16:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:23:31.380 01:16:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.380 01:16:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.380 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:31.380 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:31.380 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:23:31.380 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.380 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:31.380 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:31.380 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:31.380 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdmOTQxNTEwNDFmMzY1YjQ2ZGUyZmUzNTBhYjU1N2a/fNit: 00:23:31.380 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjJiZmY1YzIxMmMwZGE5YmI2MGUyN2NiZTE0OWQ3MDY2YjgwYTgwM2Y0NjdhZmU4MTJhZjg5MDdjYTM3Yjc5ZhS/xuQ=: 00:23:31.380 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:31.380 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:31.380 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdmOTQxNTEwNDFmMzY1YjQ2ZGUyZmUzNTBhYjU1N2a/fNit: 00:23:31.380 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjJiZmY1YzIxMmMwZGE5YmI2MGUyN2NiZTE0OWQ3MDY2YjgwYTgwM2Y0NjdhZmU4MTJhZjg5MDdjYTM3Yjc5ZhS/xuQ=: ]] 00:23:31.380 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjJiZmY1YzIxMmMwZGE5YmI2MGUyN2NiZTE0OWQ3MDY2YjgwYTgwM2Y0NjdhZmU4MTJhZjg5MDdjYTM3Yjc5ZhS/xuQ=: 00:23:31.380 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:23:31.380 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:31.380 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:31.380 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:31.380 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:31.380 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:31.381 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:31.381 01:16:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.381 01:16:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.381 01:16:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.381 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:31.381 01:16:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:31.381 01:16:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:31.381 01:16:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:31.381 01:16:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.381 01:16:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.381 01:16:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
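get_main_ns_ip (the nvmf/common.sh@741-755 markers that recur before every attach) picks the address to dial from a transport-keyed table; with tcp it resolves to NVMF_INITIATOR_IP, which is 10.0.0.1 throughout this run. The trace only covers the branch actually taken, so the sketch below ignores the untraced lines (@746, @749, @751-754); TEST_TRANSPORT is an assumed variable name, since the xtrace shows only its expanded value, tcp.

    # Sketch of get_main_ns_ip from the nvmf/common.sh@741-755 markers.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        # both -z tests tagged @747 come from one compound conditional
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}

        # indirect expansion: ${!ip} -> $NVMF_INITIATOR_IP -> 10.0.0.1
        [[ -z ${!ip} ]] && return 1
        echo "${!ip}"
    }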
00:23:31.381 01:16:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:31.381 01:16:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:31.381 01:16:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:31.381 01:16:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:31.381 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:31.381 01:16:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.381 01:16:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.381 nvme0n1 00:23:31.381 01:16:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.638 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.638 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:31.638 01:16:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.638 01:16:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.638 01:16:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.638 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.638 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.638 01:16:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.638 01:16:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.638 01:16:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.638 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:31.638 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:23:31.639 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.639 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:31.639 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:31.639 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:31.639 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzY1NjU4MmYzYzBmN2E3MGM3ZjE4OGNjYmEyMjAxMjVhZDRiY2Q1MmNlNDFmYjllG7L9XA==: 00:23:31.639 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzk4MjA4ZDMzZDhjZTQ4NDBjM2FkYTBlZGU2M2MxZGE2ODY2YmU3ZDZmYTNiMmI4bFoAIQ==: 00:23:31.639 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:31.639 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:31.639 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzY1NjU4MmYzYzBmN2E3MGM3ZjE4OGNjYmEyMjAxMjVhZDRiY2Q1MmNlNDFmYjllG7L9XA==: 00:23:31.639 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzk4MjA4ZDMzZDhjZTQ4NDBjM2FkYTBlZGU2M2MxZGE2ODY2YmU3ZDZmYTNiMmI4bFoAIQ==: ]] 00:23:31.639 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzk4MjA4ZDMzZDhjZTQ4NDBjM2FkYTBlZGU2M2MxZGE2ODY2YmU3ZDZmYTNiMmI4bFoAIQ==: 00:23:31.639 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
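The @100-@104 markers that keep recurring give away the overall shape of the test: three nested loops over digests, DH groups, and key indices, with the target reprogrammed (nvmet_auth_set_key) and the initiator reconnected (connect_authenticate) for every combination. The log confirms the nesting order, since sha256 runs through all groups before sha384 starts over at ffdhe2048. A sketch of that driver loop; the array contents are an assumption read off the combinations visible here (only sha256 and sha384 appear in this excerpt, with keyids 0-4).

    # Driver loop implied by host/auth.sh@100-104. Array contents inferred
    # from the combinations this log actually exercises; sha512 and ffdhe6144
    # are assumed to complete the usual matrix.
    digests=(sha256 sha384 sha512)
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)

    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done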
00:23:31.639 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:31.639 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:31.639 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:31.639 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:31.639 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:31.639 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:31.639 01:16:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.639 01:16:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.639 01:16:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.639 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:31.639 01:16:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:31.639 01:16:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:31.639 01:16:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:31.639 01:16:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.639 01:16:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.639 01:16:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:31.639 01:16:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:31.639 01:16:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:31.639 01:16:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:31.639 01:16:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:31.639 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:31.639 01:16:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.639 01:16:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.639 nvme0n1 00:23:31.639 01:16:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.639 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.639 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:31.639 01:16:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.897 01:16:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.897 01:16:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.897 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.897 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.897 01:16:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.897 01:16:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.897 01:16:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.897 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:23:31.897 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:23:31.897 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.897 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:31.897 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:31.897 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:31.897 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTJmYmEyZjE3YzNiOTY3NmQ2NGMzZTE1MWIxYWJiZjSywDe0: 00:23:31.897 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDcwNjI5Y2NiZDY0OTcwODQyYzZiMGQ0NjAyMTg0NGHT7qr6: 00:23:31.897 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:31.897 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:31.897 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTJmYmEyZjE3YzNiOTY3NmQ2NGMzZTE1MWIxYWJiZjSywDe0: 00:23:31.897 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDcwNjI5Y2NiZDY0OTcwODQyYzZiMGQ0NjAyMTg0NGHT7qr6: ]] 00:23:31.897 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDcwNjI5Y2NiZDY0OTcwODQyYzZiMGQ0NjAyMTg0NGHT7qr6: 00:23:31.897 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:23:31.897 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:31.897 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:31.897 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:31.897 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:31.897 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:31.897 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:31.897 01:16:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.897 01:16:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.897 01:16:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.897 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:31.897 01:16:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:31.897 01:16:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:31.897 01:16:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:31.897 01:16:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.897 01:16:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.897 01:16:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:31.897 01:16:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:31.897 01:16:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:31.897 01:16:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:31.897 01:16:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:31.897 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:31.897 01:16:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.897 01:16:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.897 nvme0n1 00:23:31.897 01:16:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.897 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.897 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:32.156 01:16:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.156 01:16:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.156 01:16:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.156 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.156 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:32.156 01:16:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.156 01:16:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.156 01:16:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.156 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:32.156 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:23:32.156 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:32.156 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:32.156 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:32.156 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:32.156 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGI0OTExOTYzNTk0YjUyN2E3MDJmZTQ1MWE1N2NjZTA0Yjg4NTNiYjhkNWRhMGJmJJWzog==: 00:23:32.156 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTgxYWQwMGMwOWEzZDM2ODlkMTQ5MTZjYTQxZDNjNmaYWk7V: 00:23:32.156 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:32.156 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:32.156 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGI0OTExOTYzNTk0YjUyN2E3MDJmZTQ1MWE1N2NjZTA0Yjg4NTNiYjhkNWRhMGJmJJWzog==: 00:23:32.156 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTgxYWQwMGMwOWEzZDM2ODlkMTQ5MTZjYTQxZDNjNmaYWk7V: ]] 00:23:32.156 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTgxYWQwMGMwOWEzZDM2ODlkMTQ5MTZjYTQxZDNjNmaYWk7V: 00:23:32.156 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:23:32.156 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:32.156 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:32.156 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:32.156 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:32.156 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:32.156 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:32.156 01:16:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.156 01:16:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.156 01:16:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.156 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:32.156 01:16:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:32.156 01:16:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:32.156 01:16:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:32.156 01:16:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.156 01:16:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.156 01:16:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:32.156 01:16:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:32.156 01:16:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:32.156 01:16:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:32.156 01:16:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:32.156 01:16:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:32.156 01:16:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.156 01:16:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.414 nvme0n1 00:23:32.414 01:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.414 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:32.414 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:32.414 01:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.414 01:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.414 01:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.414 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.414 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:32.414 01:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.414 01:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.414 01:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.414 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:32.414 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:23:32.414 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:32.414 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:32.414 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:32.414 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:32.414 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MTBlZDgyZjBmODc1MzQ1ZGJjZDgzMmFmYTY1NzcxNDI3MDAzZjEwNzhjNjg5ZWU4MGQ1NTdhNjUxODJhMDI0Np/o7pE=: 00:23:32.415 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:32.415 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:32.415 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:32.415 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTBlZDgyZjBmODc1MzQ1ZGJjZDgzMmFmYTY1NzcxNDI3MDAzZjEwNzhjNjg5ZWU4MGQ1NTdhNjUxODJhMDI0Np/o7pE=: 00:23:32.415 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:32.415 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:23:32.415 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:32.415 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:32.415 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:32.415 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:32.415 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:32.415 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:32.415 01:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.415 01:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.415 01:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.415 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:32.415 01:16:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:32.415 01:16:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:32.415 01:16:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:32.415 01:16:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.415 01:16:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.415 01:16:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:32.415 01:16:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:32.415 01:16:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:32.415 01:16:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:32.415 01:16:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:32.415 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:32.415 01:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.415 01:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.415 nvme0n1 00:23:32.415 01:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.415 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:32.415 01:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.415 01:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.415 01:16:48 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:32.673 01:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.673 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.673 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:32.673 01:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.673 01:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.673 01:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.673 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:32.673 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:32.673 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:23:32.673 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:32.673 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:32.673 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:32.673 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:32.673 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdmOTQxNTEwNDFmMzY1YjQ2ZGUyZmUzNTBhYjU1N2a/fNit: 00:23:32.673 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjJiZmY1YzIxMmMwZGE5YmI2MGUyN2NiZTE0OWQ3MDY2YjgwYTgwM2Y0NjdhZmU4MTJhZjg5MDdjYTM3Yjc5ZhS/xuQ=: 00:23:32.673 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:32.673 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:32.673 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdmOTQxNTEwNDFmMzY1YjQ2ZGUyZmUzNTBhYjU1N2a/fNit: 00:23:32.673 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjJiZmY1YzIxMmMwZGE5YmI2MGUyN2NiZTE0OWQ3MDY2YjgwYTgwM2Y0NjdhZmU4MTJhZjg5MDdjYTM3Yjc5ZhS/xuQ=: ]] 00:23:32.674 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjJiZmY1YzIxMmMwZGE5YmI2MGUyN2NiZTE0OWQ3MDY2YjgwYTgwM2Y0NjdhZmU4MTJhZjg5MDdjYTM3Yjc5ZhS/xuQ=: 00:23:32.674 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:23:32.674 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:32.674 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:32.674 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:32.674 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:32.674 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:32.674 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:32.674 01:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.674 01:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.674 01:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.674 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:32.674 01:16:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:32.674 01:16:48 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:23:32.674 01:16:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:32.674 01:16:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.674 01:16:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.674 01:16:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:32.674 01:16:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:32.674 01:16:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:32.674 01:16:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:32.674 01:16:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:32.674 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:32.674 01:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.674 01:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.932 nvme0n1 00:23:32.932 01:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.932 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:32.932 01:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.932 01:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.932 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:32.932 01:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.932 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.932 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:32.932 01:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.932 01:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.932 01:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.932 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:32.932 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:23:32.932 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:32.932 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:32.932 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:32.932 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:32.932 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzY1NjU4MmYzYzBmN2E3MGM3ZjE4OGNjYmEyMjAxMjVhZDRiY2Q1MmNlNDFmYjllG7L9XA==: 00:23:32.932 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzk4MjA4ZDMzZDhjZTQ4NDBjM2FkYTBlZGU2M2MxZGE2ODY2YmU3ZDZmYTNiMmI4bFoAIQ==: 00:23:32.932 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:32.932 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:32.932 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YzY1NjU4MmYzYzBmN2E3MGM3ZjE4OGNjYmEyMjAxMjVhZDRiY2Q1MmNlNDFmYjllG7L9XA==: 00:23:32.932 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzk4MjA4ZDMzZDhjZTQ4NDBjM2FkYTBlZGU2M2MxZGE2ODY2YmU3ZDZmYTNiMmI4bFoAIQ==: ]] 00:23:32.932 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzk4MjA4ZDMzZDhjZTQ4NDBjM2FkYTBlZGU2M2MxZGE2ODY2YmU3ZDZmYTNiMmI4bFoAIQ==: 00:23:32.932 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:23:32.932 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:32.932 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:32.932 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:32.932 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:32.932 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:32.932 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:32.932 01:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.932 01:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.932 01:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.932 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:32.932 01:16:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:32.932 01:16:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:32.932 01:16:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:32.932 01:16:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.932 01:16:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.932 01:16:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:32.932 01:16:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:32.932 01:16:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:32.932 01:16:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:32.932 01:16:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:32.932 01:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:32.932 01:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.932 01:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.191 nvme0n1 00:23:33.191 01:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.191 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:33.191 01:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.191 01:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.191 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:33.191 01:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.191 01:16:49 
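The trace above is the inner body of auth.sh's DH-group sweep: for each DH group the script walks every key index, programs the target via nvmet_auth_set_key, then has the host connect and verify via connect_authenticate. A minimal sketch of that loop, reconstructed from the host/auth.sh@101-@104 markers (the dhgroups array contents and the keys array are assumptions; this excerpt only shows the ffdhe4096/6144/8192 rounds):

    # sha384 sweep implied by the @101-@103 markers; keys[] and both helper
    # functions are defined earlier in auth.sh, outside this excerpt
    dhgroups=(ffdhe4096 ffdhe6144 ffdhe8192)
    for dhgroup in "${dhgroups[@]}"; do            # @101
        for keyid in "${!keys[@]}"; do             # @102, indices 0..4 in this run
            nvmet_auth_set_key sha384 "$dhgroup" "$keyid"    # @103: target side
            connect_authenticate sha384 "$dhgroup" "$keyid"  # @104: host side
        done
    done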
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:33.191 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:33.191 01:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.191 01:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.191 01:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.191 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:33.191 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:23:33.191 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:33.191 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:33.191 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:33.191 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:33.191 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTJmYmEyZjE3YzNiOTY3NmQ2NGMzZTE1MWIxYWJiZjSywDe0: 00:23:33.191 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDcwNjI5Y2NiZDY0OTcwODQyYzZiMGQ0NjAyMTg0NGHT7qr6: 00:23:33.191 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:33.191 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:33.191 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTJmYmEyZjE3YzNiOTY3NmQ2NGMzZTE1MWIxYWJiZjSywDe0: 00:23:33.191 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDcwNjI5Y2NiZDY0OTcwODQyYzZiMGQ0NjAyMTg0NGHT7qr6: ]] 00:23:33.191 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDcwNjI5Y2NiZDY0OTcwODQyYzZiMGQ0NjAyMTg0NGHT7qr6: 00:23:33.191 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:23:33.191 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:33.191 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:33.191 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:33.191 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:33.191 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:33.191 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:33.191 01:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.191 01:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.191 01:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.191 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:33.191 01:16:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:33.191 01:16:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:33.191 01:16:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:33.191 01:16:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:33.191 01:16:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:33.191 01:16:49 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:33.191 01:16:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:33.191 01:16:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:33.191 01:16:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:33.191 01:16:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:33.191 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:33.191 01:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.191 01:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.449 nvme0n1 00:23:33.449 01:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.449 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:33.449 01:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.449 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:33.449 01:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.449 01:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.707 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:33.707 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:33.707 01:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.707 01:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.707 01:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.707 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:33.707 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:23:33.707 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:33.707 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:33.707 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:33.707 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:33.707 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGI0OTExOTYzNTk0YjUyN2E3MDJmZTQ1MWE1N2NjZTA0Yjg4NTNiYjhkNWRhMGJmJJWzog==: 00:23:33.707 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTgxYWQwMGMwOWEzZDM2ODlkMTQ5MTZjYTQxZDNjNmaYWk7V: 00:23:33.707 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:33.707 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:33.707 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGI0OTExOTYzNTk0YjUyN2E3MDJmZTQ1MWE1N2NjZTA0Yjg4NTNiYjhkNWRhMGJmJJWzog==: 00:23:33.707 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTgxYWQwMGMwOWEzZDM2ODlkMTQ5MTZjYTQxZDNjNmaYWk7V: ]] 00:23:33.707 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTgxYWQwMGMwOWEzZDM2ODlkMTQ5MTZjYTQxZDNjNmaYWk7V: 00:23:33.707 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:23:33.707 01:16:49 
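nvmet_auth_set_key (host/auth.sh@42-@51) takes a digest, DH group, and key index, looks up the key and the optional controller key, and echoes the hash name, group, and secrets into the target's per-host auth attributes. xtrace does not record redirection targets, so the destinations in this sketch are an assumption based on the kernel nvmet configfs layout:

    # plausible body of nvmet_auth_set_key; the configfs paths are assumed,
    # since the '>' targets are invisible in the xtrace output above
    hostdir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha384)' > "$hostdir/dhchap_hash"      # @48
    echo ffdhe4096      > "$hostdir/dhchap_dhgroup"   # @49
    echo "$key"         > "$hostdir/dhchap_key"       # @50
    [[ -z $ckey ]] || echo "$ckey" > "$hostdir/dhchap_ctrl_key"  # @51, only when a ctrl key exists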
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:33.707 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:33.707 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:33.707 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:33.707 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:33.707 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:33.707 01:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.707 01:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.707 01:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.707 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:33.707 01:16:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:33.707 01:16:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:33.707 01:16:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:33.707 01:16:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:33.708 01:16:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:33.708 01:16:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:33.708 01:16:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:33.708 01:16:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:33.708 01:16:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:33.708 01:16:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:33.708 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:33.708 01:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.708 01:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.966 nvme0n1 00:23:33.966 01:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.966 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:33.966 01:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.966 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:33.966 01:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.966 01:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.966 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:33.966 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:33.966 01:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.966 01:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.966 01:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.966 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:23:33.966 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:23:33.966 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:33.966 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:33.966 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:33.966 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:33.966 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTBlZDgyZjBmODc1MzQ1ZGJjZDgzMmFmYTY1NzcxNDI3MDAzZjEwNzhjNjg5ZWU4MGQ1NTdhNjUxODJhMDI0Np/o7pE=: 00:23:33.966 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:33.966 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:33.966 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:33.966 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTBlZDgyZjBmODc1MzQ1ZGJjZDgzMmFmYTY1NzcxNDI3MDAzZjEwNzhjNjg5ZWU4MGQ1NTdhNjUxODJhMDI0Np/o7pE=: 00:23:33.966 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:33.966 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:23:33.966 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:33.966 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:33.966 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:33.966 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:33.966 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:33.966 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:33.966 01:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.966 01:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.966 01:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.966 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:33.966 01:16:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:33.966 01:16:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:33.966 01:16:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:33.966 01:16:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:33.966 01:16:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:33.966 01:16:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:33.966 01:16:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:33.966 01:16:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:33.966 01:16:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:33.966 01:16:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:33.966 01:16:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:33.966 01:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:23:33.966 01:16:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.224 nvme0n1 00:23:34.224 01:16:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.224 01:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:34.224 01:16:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.224 01:16:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.224 01:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:34.224 01:16:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.224 01:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.224 01:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:34.224 01:16:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.224 01:16:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.224 01:16:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.224 01:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:34.224 01:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:34.224 01:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:23:34.224 01:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:34.224 01:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:34.224 01:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:34.224 01:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:34.224 01:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdmOTQxNTEwNDFmMzY1YjQ2ZGUyZmUzNTBhYjU1N2a/fNit: 00:23:34.224 01:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjJiZmY1YzIxMmMwZGE5YmI2MGUyN2NiZTE0OWQ3MDY2YjgwYTgwM2Y0NjdhZmU4MTJhZjg5MDdjYTM3Yjc5ZhS/xuQ=: 00:23:34.224 01:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:34.224 01:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:34.224 01:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdmOTQxNTEwNDFmMzY1YjQ2ZGUyZmUzNTBhYjU1N2a/fNit: 00:23:34.224 01:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjJiZmY1YzIxMmMwZGE5YmI2MGUyN2NiZTE0OWQ3MDY2YjgwYTgwM2Y0NjdhZmU4MTJhZjg5MDdjYTM3Yjc5ZhS/xuQ=: ]] 00:23:34.224 01:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjJiZmY1YzIxMmMwZGE5YmI2MGUyN2NiZTE0OWQ3MDY2YjgwYTgwM2Y0NjdhZmU4MTJhZjg5MDdjYTM3Yjc5ZhS/xuQ=: 00:23:34.224 01:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:23:34.224 01:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:34.224 01:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:34.224 01:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:34.224 01:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:34.224 01:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:34.224 01:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:23:34.224 01:16:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.224 01:16:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.224 01:16:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.224 01:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:34.224 01:16:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:34.224 01:16:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:34.225 01:16:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:34.225 01:16:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:34.225 01:16:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:34.225 01:16:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:34.225 01:16:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:34.225 01:16:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:34.225 01:16:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:34.225 01:16:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:34.225 01:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:34.225 01:16:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.225 01:16:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.825 nvme0n1 00:23:34.825 01:16:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.825 01:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:34.825 01:16:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.825 01:16:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.825 01:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:34.825 01:16:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.825 01:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.825 01:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:34.825 01:16:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.825 01:16:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.825 01:16:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.825 01:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:34.825 01:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:23:34.825 01:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:34.825 01:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:34.825 01:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:34.825 01:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:34.825 01:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YzY1NjU4MmYzYzBmN2E3MGM3ZjE4OGNjYmEyMjAxMjVhZDRiY2Q1MmNlNDFmYjllG7L9XA==: 00:23:34.826 01:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzk4MjA4ZDMzZDhjZTQ4NDBjM2FkYTBlZGU2M2MxZGE2ODY2YmU3ZDZmYTNiMmI4bFoAIQ==: 00:23:34.826 01:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:34.826 01:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:34.826 01:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzY1NjU4MmYzYzBmN2E3MGM3ZjE4OGNjYmEyMjAxMjVhZDRiY2Q1MmNlNDFmYjllG7L9XA==: 00:23:34.826 01:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzk4MjA4ZDMzZDhjZTQ4NDBjM2FkYTBlZGU2M2MxZGE2ODY2YmU3ZDZmYTNiMmI4bFoAIQ==: ]] 00:23:34.826 01:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzk4MjA4ZDMzZDhjZTQ4NDBjM2FkYTBlZGU2M2MxZGE2ODY2YmU3ZDZmYTNiMmI4bFoAIQ==: 00:23:34.826 01:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:23:34.826 01:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:34.826 01:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:34.826 01:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:34.826 01:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:34.826 01:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:34.826 01:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:34.826 01:16:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.826 01:16:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.826 01:16:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.826 01:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:34.826 01:16:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:34.826 01:16:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:34.826 01:16:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:34.826 01:16:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:34.826 01:16:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:34.826 01:16:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:34.826 01:16:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:34.826 01:16:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:34.826 01:16:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:34.826 01:16:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:34.826 01:16:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:34.826 01:16:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.826 01:16:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.390 nvme0n1 00:23:35.390 01:16:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.390 01:16:51 
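Every secret in this run uses the NVMe in-band authentication key representation, DHHC-1:<hh>:<base64>:, where <hh> identifies the hash used to transform the secret (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the base64 payload carries the secret followed by a CRC-32 check value, which is why the strings end in a few "extra" characters before the trailing colon. Such keys are normally produced with nvme-cli; a hedged example (flag names as in recent nvme-cli, worth confirming against gen-dhchap-key --help on your version):

    # generate a 32-byte secret transformed with SHA-384 (hmac id 2),
    # bound to the given host NQN
    nvme gen-dhchap-key --key-length=32 --hmac=2 --nqn=nqn.2024-02.io.spdk:host0
    # prints something like: DHHC-1:02:<base64(secret || crc32)>: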
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:35.390 01:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:35.390 01:16:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.390 01:16:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.390 01:16:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.390 01:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.390 01:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:35.390 01:16:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.390 01:16:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.390 01:16:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.390 01:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:35.390 01:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:23:35.390 01:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:35.390 01:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:35.390 01:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:35.390 01:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:35.390 01:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTJmYmEyZjE3YzNiOTY3NmQ2NGMzZTE1MWIxYWJiZjSywDe0: 00:23:35.390 01:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDcwNjI5Y2NiZDY0OTcwODQyYzZiMGQ0NjAyMTg0NGHT7qr6: 00:23:35.390 01:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:35.390 01:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:35.390 01:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTJmYmEyZjE3YzNiOTY3NmQ2NGMzZTE1MWIxYWJiZjSywDe0: 00:23:35.390 01:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDcwNjI5Y2NiZDY0OTcwODQyYzZiMGQ0NjAyMTg0NGHT7qr6: ]] 00:23:35.390 01:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDcwNjI5Y2NiZDY0OTcwODQyYzZiMGQ0NjAyMTg0NGHT7qr6: 00:23:35.390 01:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:23:35.390 01:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:35.390 01:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:35.390 01:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:35.390 01:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:35.390 01:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:35.390 01:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:35.390 01:16:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.390 01:16:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.390 01:16:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.390 01:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:35.390 01:16:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:23:35.390 01:16:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:35.390 01:16:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:35.390 01:16:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:35.390 01:16:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:35.390 01:16:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:35.390 01:16:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:35.390 01:16:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:35.390 01:16:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:35.390 01:16:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:35.390 01:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:35.390 01:16:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.390 01:16:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.955 nvme0n1 00:23:35.955 01:16:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.955 01:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:35.955 01:16:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.955 01:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:35.955 01:16:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.955 01:16:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.955 01:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.955 01:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:35.955 01:16:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.955 01:16:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.955 01:16:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.955 01:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:35.955 01:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:23:35.955 01:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:35.955 01:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:35.955 01:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:35.955 01:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:35.955 01:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGI0OTExOTYzNTk0YjUyN2E3MDJmZTQ1MWE1N2NjZTA0Yjg4NTNiYjhkNWRhMGJmJJWzog==: 00:23:35.955 01:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTgxYWQwMGMwOWEzZDM2ODlkMTQ5MTZjYTQxZDNjNmaYWk7V: 00:23:35.955 01:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:35.955 01:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:35.955 01:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:OGI0OTExOTYzNTk0YjUyN2E3MDJmZTQ1MWE1N2NjZTA0Yjg4NTNiYjhkNWRhMGJmJJWzog==: 00:23:35.955 01:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTgxYWQwMGMwOWEzZDM2ODlkMTQ5MTZjYTQxZDNjNmaYWk7V: ]] 00:23:35.955 01:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTgxYWQwMGMwOWEzZDM2ODlkMTQ5MTZjYTQxZDNjNmaYWk7V: 00:23:35.955 01:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:23:35.955 01:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:35.955 01:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:35.955 01:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:35.955 01:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:35.955 01:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:35.955 01:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:35.955 01:16:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.955 01:16:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.955 01:16:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.955 01:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:35.955 01:16:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:35.955 01:16:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:35.955 01:16:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:35.955 01:16:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:35.955 01:16:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:35.955 01:16:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:35.955 01:16:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:35.955 01:16:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:35.955 01:16:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:35.955 01:16:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:35.955 01:16:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:35.955 01:16:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.955 01:16:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.519 nvme0n1 00:23:36.519 01:16:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.519 01:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:36.519 01:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:36.519 01:16:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.519 01:16:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.519 01:16:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.519 01:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:23:36.519 01:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:36.519 01:16:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.519 01:16:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.519 01:16:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.519 01:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:36.519 01:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:23:36.519 01:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:36.519 01:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:36.519 01:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:36.519 01:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:36.519 01:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTBlZDgyZjBmODc1MzQ1ZGJjZDgzMmFmYTY1NzcxNDI3MDAzZjEwNzhjNjg5ZWU4MGQ1NTdhNjUxODJhMDI0Np/o7pE=: 00:23:36.519 01:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:36.519 01:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:36.519 01:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:36.519 01:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTBlZDgyZjBmODc1MzQ1ZGJjZDgzMmFmYTY1NzcxNDI3MDAzZjEwNzhjNjg5ZWU4MGQ1NTdhNjUxODJhMDI0Np/o7pE=: 00:23:36.519 01:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:36.519 01:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:23:36.519 01:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:36.519 01:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:36.519 01:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:36.519 01:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:36.519 01:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:36.519 01:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:36.519 01:16:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.519 01:16:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.519 01:16:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.519 01:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:36.519 01:16:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:36.519 01:16:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:36.519 01:16:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:36.519 01:16:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:36.519 01:16:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:36.519 01:16:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:36.519 01:16:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:36.519 01:16:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
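Key index 4 has no controller secret: the trace shows host/auth.sh@46 setting ckey= to the empty string, @51 short-circuiting on [[ -z '' ]], and the attach carrying only --dhchap-key key4 with no --dhchap-ctrlr-key. The script builds that optional argument with bash's ${var:+word} expansion at @58, which yields word only when the variable is set and non-empty:

    # empty ckeys[4] makes the array empty, so "${ckey[@]}" expands to nothing
    # and unidirectional (host-only) authentication is exercised for keyid=4
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    rpc_cmd bdev_nvme_attach_controller -b nvme0 ... --dhchap-key "key${keyid}" "${ckey[@]}"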
00:23:36.519 01:16:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:36.519 01:16:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:36.519 01:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:36.519 01:16:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.519 01:16:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.084 nvme0n1 00:23:37.084 01:16:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.084 01:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:37.084 01:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:37.084 01:16:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.084 01:16:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.084 01:16:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.084 01:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.084 01:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:37.084 01:16:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.084 01:16:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.084 01:16:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.084 01:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:37.084 01:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:37.084 01:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:23:37.084 01:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:37.084 01:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:37.084 01:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:37.084 01:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:37.084 01:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdmOTQxNTEwNDFmMzY1YjQ2ZGUyZmUzNTBhYjU1N2a/fNit: 00:23:37.084 01:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjJiZmY1YzIxMmMwZGE5YmI2MGUyN2NiZTE0OWQ3MDY2YjgwYTgwM2Y0NjdhZmU4MTJhZjg5MDdjYTM3Yjc5ZhS/xuQ=: 00:23:37.084 01:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:37.084 01:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:37.084 01:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdmOTQxNTEwNDFmMzY1YjQ2ZGUyZmUzNTBhYjU1N2a/fNit: 00:23:37.084 01:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjJiZmY1YzIxMmMwZGE5YmI2MGUyN2NiZTE0OWQ3MDY2YjgwYTgwM2Y0NjdhZmU4MTJhZjg5MDdjYTM3Yjc5ZhS/xuQ=: ]] 00:23:37.084 01:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjJiZmY1YzIxMmMwZGE5YmI2MGUyN2NiZTE0OWQ3MDY2YjgwYTgwM2Y0NjdhZmU4MTJhZjg5MDdjYTM3Yjc5ZhS/xuQ=: 00:23:37.084 01:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:23:37.084 01:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
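The nvmf/common.sh@741-@755 block that repeats before every attach is get_main_ns_ip resolving which address the host should dial: it maps the transport to the name of an environment variable (NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp) and then dereferences that name, yielding 10.0.0.1 in this run. A reconstruction from the trace; variable names other than ip and ip_candidates are assumptions, and the real helper may differ in error handling:

    get_main_ns_ip() {                                  # nvmf/common.sh@741-@755
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP      # @744
        ip_candidates["tcp"]=NVMF_INITIATOR_IP          # @745
        [[ -z $transport ]] && return 1                 # @747: '-z tcp' in the trace
        [[ -z ${ip_candidates[$transport]} ]] && return 1
        ip=${ip_candidates[$transport]}                 # @748: ip=NVMF_INITIATOR_IP
        ip=${!ip}                                       # indirect expansion to the value
        [[ -z $ip ]] && return 1                        # @750: '-z 10.0.0.1'
        echo "$ip"                                      # @755
    }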
00:23:37.084 01:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:37.084 01:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:37.084 01:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:37.084 01:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:37.084 01:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:37.084 01:16:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.084 01:16:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.084 01:16:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.084 01:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:37.084 01:16:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:37.084 01:16:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:37.084 01:16:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:37.084 01:16:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:37.084 01:16:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:37.084 01:16:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:37.084 01:16:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:37.084 01:16:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:37.084 01:16:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:37.084 01:16:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:37.084 01:16:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:37.084 01:16:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.084 01:16:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.036 nvme0n1 00:23:38.036 01:16:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.036 01:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:38.036 01:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:38.036 01:16:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.036 01:16:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.036 01:16:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.036 01:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.036 01:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:38.036 01:16:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.036 01:16:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.036 01:16:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.036 01:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:38.037 01:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:23:38.037 01:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:38.037 01:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:38.037 01:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:38.037 01:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:38.037 01:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzY1NjU4MmYzYzBmN2E3MGM3ZjE4OGNjYmEyMjAxMjVhZDRiY2Q1MmNlNDFmYjllG7L9XA==: 00:23:38.037 01:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzk4MjA4ZDMzZDhjZTQ4NDBjM2FkYTBlZGU2M2MxZGE2ODY2YmU3ZDZmYTNiMmI4bFoAIQ==: 00:23:38.037 01:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:38.037 01:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:38.037 01:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzY1NjU4MmYzYzBmN2E3MGM3ZjE4OGNjYmEyMjAxMjVhZDRiY2Q1MmNlNDFmYjllG7L9XA==: 00:23:38.037 01:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzk4MjA4ZDMzZDhjZTQ4NDBjM2FkYTBlZGU2M2MxZGE2ODY2YmU3ZDZmYTNiMmI4bFoAIQ==: ]] 00:23:38.037 01:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzk4MjA4ZDMzZDhjZTQ4NDBjM2FkYTBlZGU2M2MxZGE2ODY2YmU3ZDZmYTNiMmI4bFoAIQ==: 00:23:38.037 01:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:23:38.037 01:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:38.037 01:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:38.037 01:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:38.037 01:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:38.037 01:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:38.037 01:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:38.037 01:16:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.037 01:16:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.037 01:16:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.037 01:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:38.037 01:16:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:38.037 01:16:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:38.037 01:16:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:38.037 01:16:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:38.037 01:16:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:38.037 01:16:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:38.037 01:16:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:38.037 01:16:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:38.037 01:16:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:38.037 01:16:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:38.037 01:16:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:38.037 01:16:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.037 01:16:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.969 nvme0n1 00:23:38.969 01:16:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.969 01:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:38.969 01:16:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.969 01:16:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.969 01:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:38.969 01:16:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.969 01:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.969 01:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:38.969 01:16:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.969 01:16:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.969 01:16:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.969 01:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:38.969 01:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:23:38.969 01:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:38.969 01:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:38.969 01:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:38.969 01:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:38.969 01:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTJmYmEyZjE3YzNiOTY3NmQ2NGMzZTE1MWIxYWJiZjSywDe0: 00:23:38.969 01:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDcwNjI5Y2NiZDY0OTcwODQyYzZiMGQ0NjAyMTg0NGHT7qr6: 00:23:38.969 01:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:38.969 01:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:38.969 01:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTJmYmEyZjE3YzNiOTY3NmQ2NGMzZTE1MWIxYWJiZjSywDe0: 00:23:38.969 01:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDcwNjI5Y2NiZDY0OTcwODQyYzZiMGQ0NjAyMTg0NGHT7qr6: ]] 00:23:38.969 01:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDcwNjI5Y2NiZDY0OTcwODQyYzZiMGQ0NjAyMTg0NGHT7qr6: 00:23:38.969 01:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:23:38.969 01:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:38.969 01:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:38.969 01:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:38.969 01:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:38.969 01:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:38.969 01:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:23:38.969 01:16:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.969 01:16:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.969 01:16:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.969 01:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:38.969 01:16:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:38.969 01:16:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:38.969 01:16:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:38.969 01:16:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:38.969 01:16:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:38.969 01:16:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:38.969 01:16:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:38.969 01:16:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:38.969 01:16:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:38.969 01:16:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:38.969 01:16:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:38.969 01:16:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.969 01:16:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.902 nvme0n1 00:23:39.902 01:16:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.902 01:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:39.902 01:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:39.902 01:16:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.902 01:16:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.902 01:16:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.902 01:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:39.902 01:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:39.902 01:16:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.902 01:16:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.902 01:16:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.902 01:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:39.902 01:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:23:39.902 01:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:39.902 01:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:39.902 01:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:39.902 01:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:39.902 01:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OGI0OTExOTYzNTk0YjUyN2E3MDJmZTQ1MWE1N2NjZTA0Yjg4NTNiYjhkNWRhMGJmJJWzog==: 00:23:39.902 01:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTgxYWQwMGMwOWEzZDM2ODlkMTQ5MTZjYTQxZDNjNmaYWk7V: 00:23:39.902 01:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:39.902 01:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:39.902 01:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGI0OTExOTYzNTk0YjUyN2E3MDJmZTQ1MWE1N2NjZTA0Yjg4NTNiYjhkNWRhMGJmJJWzog==: 00:23:39.902 01:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTgxYWQwMGMwOWEzZDM2ODlkMTQ5MTZjYTQxZDNjNmaYWk7V: ]] 00:23:39.902 01:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTgxYWQwMGMwOWEzZDM2ODlkMTQ5MTZjYTQxZDNjNmaYWk7V: 00:23:39.902 01:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:23:39.902 01:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:39.902 01:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:39.902 01:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:39.902 01:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:39.902 01:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:39.902 01:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:39.902 01:16:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.902 01:16:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.902 01:16:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.902 01:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:39.902 01:16:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:39.902 01:16:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:39.902 01:16:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:39.902 01:16:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:39.902 01:16:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:39.902 01:16:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:39.902 01:16:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:39.902 01:16:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:39.902 01:16:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:39.902 01:16:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:39.902 01:16:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:39.902 01:16:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.902 01:16:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.833 nvme0n1 00:23:40.833 01:16:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.833 01:16:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:23:40.833 01:16:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:40.833 01:16:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.833 01:16:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.833 01:16:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.833 01:16:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.833 01:16:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:40.833 01:16:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.833 01:16:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.833 01:16:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.833 01:16:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:40.833 01:16:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:23:40.833 01:16:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:40.833 01:16:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:40.833 01:16:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:40.833 01:16:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:40.833 01:16:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTBlZDgyZjBmODc1MzQ1ZGJjZDgzMmFmYTY1NzcxNDI3MDAzZjEwNzhjNjg5ZWU4MGQ1NTdhNjUxODJhMDI0Np/o7pE=: 00:23:40.833 01:16:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:40.833 01:16:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:40.833 01:16:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:40.833 01:16:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTBlZDgyZjBmODc1MzQ1ZGJjZDgzMmFmYTY1NzcxNDI3MDAzZjEwNzhjNjg5ZWU4MGQ1NTdhNjUxODJhMDI0Np/o7pE=: 00:23:40.833 01:16:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:40.833 01:16:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:23:40.833 01:16:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:40.833 01:16:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:40.833 01:16:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:40.833 01:16:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:40.833 01:16:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:40.833 01:16:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:40.833 01:16:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.833 01:16:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.833 01:16:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.833 01:16:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:40.833 01:16:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:40.833 01:16:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:40.833 01:16:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:40.833 01:16:56 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:40.833 01:16:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:40.833 01:16:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:40.833 01:16:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:40.833 01:16:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:40.833 01:16:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:40.833 01:16:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:40.833 01:16:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:40.833 01:16:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.833 01:16:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.398 nvme0n1 00:23:41.398 01:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.398 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:41.398 01:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.398 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:41.398 01:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.398 01:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.656 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.656 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:41.656 01:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.656 01:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.656 01:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.656 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:41.656 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:41.656 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:41.656 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:23:41.656 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:41.656 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:41.656 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:41.656 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:41.656 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdmOTQxNTEwNDFmMzY1YjQ2ZGUyZmUzNTBhYjU1N2a/fNit: 00:23:41.656 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjJiZmY1YzIxMmMwZGE5YmI2MGUyN2NiZTE0OWQ3MDY2YjgwYTgwM2Y0NjdhZmU4MTJhZjg5MDdjYTM3Yjc5ZhS/xuQ=: 00:23:41.656 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:41.656 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:41.656 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZTdmOTQxNTEwNDFmMzY1YjQ2ZGUyZmUzNTBhYjU1N2a/fNit: 00:23:41.656 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjJiZmY1YzIxMmMwZGE5YmI2MGUyN2NiZTE0OWQ3MDY2YjgwYTgwM2Y0NjdhZmU4MTJhZjg5MDdjYTM3Yjc5ZhS/xuQ=: ]] 00:23:41.656 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjJiZmY1YzIxMmMwZGE5YmI2MGUyN2NiZTE0OWQ3MDY2YjgwYTgwM2Y0NjdhZmU4MTJhZjg5MDdjYTM3Yjc5ZhS/xuQ=: 00:23:41.656 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:23:41.656 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:41.656 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:41.656 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:41.656 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:41.656 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:41.656 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:41.656 01:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.656 01:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.656 01:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.656 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:41.656 01:16:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:41.656 01:16:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:41.656 01:16:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:41.656 01:16:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:41.656 01:16:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:41.656 01:16:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:41.656 01:16:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:41.656 01:16:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:41.656 01:16:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:41.656 01:16:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:41.657 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:41.657 01:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.657 01:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.657 nvme0n1 00:23:41.657 01:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.657 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:41.657 01:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.657 01:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.657 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:41.657 01:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.657 01:16:57 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.657 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:41.657 01:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.657 01:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.657 01:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.657 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:41.657 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:23:41.657 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:41.657 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:41.657 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:41.657 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:41.657 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzY1NjU4MmYzYzBmN2E3MGM3ZjE4OGNjYmEyMjAxMjVhZDRiY2Q1MmNlNDFmYjllG7L9XA==: 00:23:41.657 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzk4MjA4ZDMzZDhjZTQ4NDBjM2FkYTBlZGU2M2MxZGE2ODY2YmU3ZDZmYTNiMmI4bFoAIQ==: 00:23:41.657 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:41.657 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:41.657 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzY1NjU4MmYzYzBmN2E3MGM3ZjE4OGNjYmEyMjAxMjVhZDRiY2Q1MmNlNDFmYjllG7L9XA==: 00:23:41.657 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzk4MjA4ZDMzZDhjZTQ4NDBjM2FkYTBlZGU2M2MxZGE2ODY2YmU3ZDZmYTNiMmI4bFoAIQ==: ]] 00:23:41.657 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzk4MjA4ZDMzZDhjZTQ4NDBjM2FkYTBlZGU2M2MxZGE2ODY2YmU3ZDZmYTNiMmI4bFoAIQ==: 00:23:41.657 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:23:41.657 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:41.657 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:41.657 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:41.657 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:41.657 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:41.657 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:41.657 01:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.657 01:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.914 01:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.914 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:41.914 01:16:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:41.914 01:16:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:41.914 01:16:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:41.914 01:16:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:41.914 01:16:57 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:41.914 01:16:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:41.914 01:16:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:41.914 01:16:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:41.914 01:16:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:41.914 01:16:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:41.914 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:41.914 01:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.914 01:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.914 nvme0n1 00:23:41.914 01:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.914 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:41.914 01:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.914 01:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.914 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:41.914 01:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.914 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.914 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:41.914 01:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.914 01:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.914 01:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.914 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:41.914 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:23:41.914 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:41.914 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:41.914 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:41.914 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:41.914 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTJmYmEyZjE3YzNiOTY3NmQ2NGMzZTE1MWIxYWJiZjSywDe0: 00:23:41.914 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDcwNjI5Y2NiZDY0OTcwODQyYzZiMGQ0NjAyMTg0NGHT7qr6: 00:23:41.914 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:41.914 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:41.914 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTJmYmEyZjE3YzNiOTY3NmQ2NGMzZTE1MWIxYWJiZjSywDe0: 00:23:41.914 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDcwNjI5Y2NiZDY0OTcwODQyYzZiMGQ0NjAyMTg0NGHT7qr6: ]] 00:23:41.914 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDcwNjI5Y2NiZDY0OTcwODQyYzZiMGQ0NjAyMTg0NGHT7qr6: 00:23:41.914 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:23:41.914 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:41.914 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:41.914 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:41.914 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:41.914 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:41.914 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:41.914 01:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.914 01:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.914 01:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.914 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:41.914 01:16:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:41.915 01:16:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:41.915 01:16:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:41.915 01:16:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:41.915 01:16:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:41.915 01:16:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:41.915 01:16:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:41.915 01:16:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:41.915 01:16:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:41.915 01:16:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:41.915 01:16:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:41.915 01:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.915 01:16:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.172 nvme0n1 00:23:42.172 01:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.172 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:42.172 01:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.172 01:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.172 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:42.172 01:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.172 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:42.172 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:42.172 01:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.172 01:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.172 01:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.172 01:16:58 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:42.172 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:23:42.172 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:42.172 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:42.172 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:42.172 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:42.172 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGI0OTExOTYzNTk0YjUyN2E3MDJmZTQ1MWE1N2NjZTA0Yjg4NTNiYjhkNWRhMGJmJJWzog==: 00:23:42.172 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTgxYWQwMGMwOWEzZDM2ODlkMTQ5MTZjYTQxZDNjNmaYWk7V: 00:23:42.172 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:42.172 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:42.172 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGI0OTExOTYzNTk0YjUyN2E3MDJmZTQ1MWE1N2NjZTA0Yjg4NTNiYjhkNWRhMGJmJJWzog==: 00:23:42.172 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTgxYWQwMGMwOWEzZDM2ODlkMTQ5MTZjYTQxZDNjNmaYWk7V: ]] 00:23:42.172 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTgxYWQwMGMwOWEzZDM2ODlkMTQ5MTZjYTQxZDNjNmaYWk7V: 00:23:42.172 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:23:42.172 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:42.172 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:42.172 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:42.172 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:42.172 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:42.172 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:42.172 01:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.172 01:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.172 01:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.172 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:42.172 01:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:42.172 01:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:42.172 01:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:42.172 01:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:42.173 01:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:42.173 01:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:42.173 01:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:42.173 01:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:42.173 01:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:42.173 01:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:42.173 01:16:58 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:42.173 01:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.173 01:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.431 nvme0n1 00:23:42.431 01:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.431 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:42.431 01:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.431 01:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.431 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:42.431 01:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.431 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:42.431 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:42.431 01:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.431 01:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.431 01:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.431 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:42.431 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:23:42.431 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:42.431 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:42.431 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:42.431 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:42.431 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTBlZDgyZjBmODc1MzQ1ZGJjZDgzMmFmYTY1NzcxNDI3MDAzZjEwNzhjNjg5ZWU4MGQ1NTdhNjUxODJhMDI0Np/o7pE=: 00:23:42.431 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:42.431 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:42.431 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:42.431 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTBlZDgyZjBmODc1MzQ1ZGJjZDgzMmFmYTY1NzcxNDI3MDAzZjEwNzhjNjg5ZWU4MGQ1NTdhNjUxODJhMDI0Np/o7pE=: 00:23:42.431 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:42.431 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:23:42.431 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:42.431 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:42.431 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:42.431 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:42.431 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:42.431 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:42.431 01:16:58 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.431 01:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.431 01:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.431 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:42.431 01:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:42.431 01:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:42.431 01:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:42.431 01:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:42.431 01:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:42.431 01:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:42.431 01:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:42.431 01:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:42.431 01:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:42.431 01:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:42.431 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:42.431 01:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.431 01:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.690 nvme0n1 00:23:42.690 01:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.690 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:42.690 01:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.690 01:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.690 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:42.690 01:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.690 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:42.690 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:42.690 01:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.690 01:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.690 01:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.690 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:42.690 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:42.690 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:23:42.690 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:42.690 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:42.690 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:42.690 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:42.690 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZTdmOTQxNTEwNDFmMzY1YjQ2ZGUyZmUzNTBhYjU1N2a/fNit: 00:23:42.690 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjJiZmY1YzIxMmMwZGE5YmI2MGUyN2NiZTE0OWQ3MDY2YjgwYTgwM2Y0NjdhZmU4MTJhZjg5MDdjYTM3Yjc5ZhS/xuQ=: 00:23:42.690 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:42.690 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:42.690 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdmOTQxNTEwNDFmMzY1YjQ2ZGUyZmUzNTBhYjU1N2a/fNit: 00:23:42.690 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjJiZmY1YzIxMmMwZGE5YmI2MGUyN2NiZTE0OWQ3MDY2YjgwYTgwM2Y0NjdhZmU4MTJhZjg5MDdjYTM3Yjc5ZhS/xuQ=: ]] 00:23:42.690 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjJiZmY1YzIxMmMwZGE5YmI2MGUyN2NiZTE0OWQ3MDY2YjgwYTgwM2Y0NjdhZmU4MTJhZjg5MDdjYTM3Yjc5ZhS/xuQ=: 00:23:42.690 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:23:42.690 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:42.690 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:42.690 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:42.690 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:42.690 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:42.690 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:42.690 01:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.690 01:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.690 01:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.690 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:42.690 01:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:42.690 01:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:42.690 01:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:42.690 01:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:42.690 01:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:42.690 01:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:42.690 01:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:42.690 01:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:42.690 01:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:42.690 01:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:42.690 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:42.690 01:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.690 01:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.949 nvme0n1 00:23:42.949 01:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.949 
01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:42.949 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:42.949 01:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.949 01:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.949 01:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.949 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:42.949 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:42.949 01:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.949 01:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.949 01:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.949 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:42.949 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:23:42.949 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:42.949 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:42.949 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:42.949 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:42.949 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzY1NjU4MmYzYzBmN2E3MGM3ZjE4OGNjYmEyMjAxMjVhZDRiY2Q1MmNlNDFmYjllG7L9XA==: 00:23:42.949 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzk4MjA4ZDMzZDhjZTQ4NDBjM2FkYTBlZGU2M2MxZGE2ODY2YmU3ZDZmYTNiMmI4bFoAIQ==: 00:23:42.949 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:42.949 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:42.949 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzY1NjU4MmYzYzBmN2E3MGM3ZjE4OGNjYmEyMjAxMjVhZDRiY2Q1MmNlNDFmYjllG7L9XA==: 00:23:42.949 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzk4MjA4ZDMzZDhjZTQ4NDBjM2FkYTBlZGU2M2MxZGE2ODY2YmU3ZDZmYTNiMmI4bFoAIQ==: ]] 00:23:42.949 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzk4MjA4ZDMzZDhjZTQ4NDBjM2FkYTBlZGU2M2MxZGE2ODY2YmU3ZDZmYTNiMmI4bFoAIQ==: 00:23:42.949 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:23:42.949 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:42.949 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:42.949 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:42.949 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:42.949 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:42.949 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:42.949 01:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.949 01:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.949 01:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.949 01:16:58 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:42.949 01:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:42.949 01:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:42.949 01:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:42.949 01:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:42.949 01:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:42.949 01:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:42.949 01:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:42.949 01:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:42.949 01:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:42.949 01:16:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:42.949 01:16:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:42.949 01:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.949 01:16:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.209 nvme0n1 00:23:43.209 01:16:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.209 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:43.209 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:43.209 01:16:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.209 01:16:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.209 01:16:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.209 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:43.209 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:43.209 01:16:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.209 01:16:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.209 01:16:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.209 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:43.209 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:23:43.209 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:43.209 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:43.209 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:43.209 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:43.209 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTJmYmEyZjE3YzNiOTY3NmQ2NGMzZTE1MWIxYWJiZjSywDe0: 00:23:43.209 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDcwNjI5Y2NiZDY0OTcwODQyYzZiMGQ0NjAyMTg0NGHT7qr6: 00:23:43.209 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:43.209 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
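For anyone replaying this trace by hand: every pass of the nvmf_auth_host key loop re-keys the kernel nvmet target through the nvmet_auth_set_key helper being traced here, restricts the SPDK initiator to the digest/dhgroup under test, attaches with the matching key pair, and detaches again. Below is a minimal stand-alone sketch of one such pass, built from the RPC invocations and key values visible in this log; the scripts/rpc.py path and the nvmet configfs attribute names (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) are assumptions, and key1/ckey1 are the keyring names the test registered earlier in the run.

#!/usr/bin/env bash
# Sketch of one connect/authenticate pass (assumed paths are marked below).
digest=sha512 dhgroup=ffdhe3072 keyid=1
host=nqn.2024-02.io.spdk:host0 subsys=nqn.2024-02.io.spdk:cnode0
# Key material for keyid=1 as it appears in this log.
key='DHHC-1:00:YzY1NjU4MmYzYzBmN2E3MGM3ZjE4OGNjYmEyMjAxMjVhZDRiY2Q1MmNlNDFmYjllG7L9XA==:'
ckey='DHHC-1:02:Mzk4MjA4ZDMzZDhjZTQ4NDBjM2FkYTBlZGU2M2MxZGE2ODY2YmU3ZDZmYTNiMmI4bFoAIQ==:'

# Target side (what nvmet_auth_set_key does): program the host's DH-HMAC-CHAP
# secret into kernel nvmet configfs -- the attribute names are an assumption.
h=/sys/kernel/config/nvmet/hosts/$host
echo "hmac($digest)" > "$h/dhchap_hash"
echo "$dhgroup"      > "$h/dhchap_dhgroup"
echo "$key"          > "$h/dhchap_key"
echo "$ckey"         > "$h/dhchap_ctrl_key"

# Initiator side: allow only the digest/dhgroup under test, then attach using
# the pre-registered keyring entries key$keyid / ckey$keyid.
scripts/rpc.py bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
	-q "$host" -n "$subsys" --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# The pass counts as good when the controller shows up under its bdev name,
# and it is detached before the next digest/dhgroup/keyid combination runs.
[[ $(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
scripts/rpc.py bdev_nvme_detach_controller nvme0

This is the same sequence the trace records for every digest (sha384, sha512), dhgroup (ffdhe2048 through ffdhe8192), and keyid (0-4) combination; only the three loop variables change between passes.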
00:23:43.209 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTJmYmEyZjE3YzNiOTY3NmQ2NGMzZTE1MWIxYWJiZjSywDe0: 00:23:43.209 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDcwNjI5Y2NiZDY0OTcwODQyYzZiMGQ0NjAyMTg0NGHT7qr6: ]] 00:23:43.209 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDcwNjI5Y2NiZDY0OTcwODQyYzZiMGQ0NjAyMTg0NGHT7qr6: 00:23:43.209 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:23:43.209 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:43.209 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:43.209 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:43.209 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:43.209 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:43.209 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:43.209 01:16:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.209 01:16:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.210 01:16:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.210 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:43.210 01:16:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:43.210 01:16:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:43.210 01:16:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:43.210 01:16:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:43.210 01:16:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:43.210 01:16:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:43.210 01:16:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:43.210 01:16:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:43.210 01:16:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:43.210 01:16:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:43.210 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:43.210 01:16:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.210 01:16:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.469 nvme0n1 00:23:43.469 01:16:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.469 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:43.469 01:16:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.469 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:43.469 01:16:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.469 01:16:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.469 01:16:59 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:43.469 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:43.469 01:16:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.469 01:16:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.469 01:16:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.469 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:43.469 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:23:43.469 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:43.469 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:43.469 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:43.469 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:43.469 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGI0OTExOTYzNTk0YjUyN2E3MDJmZTQ1MWE1N2NjZTA0Yjg4NTNiYjhkNWRhMGJmJJWzog==: 00:23:43.469 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTgxYWQwMGMwOWEzZDM2ODlkMTQ5MTZjYTQxZDNjNmaYWk7V: 00:23:43.469 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:43.469 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:43.469 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGI0OTExOTYzNTk0YjUyN2E3MDJmZTQ1MWE1N2NjZTA0Yjg4NTNiYjhkNWRhMGJmJJWzog==: 00:23:43.469 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTgxYWQwMGMwOWEzZDM2ODlkMTQ5MTZjYTQxZDNjNmaYWk7V: ]] 00:23:43.469 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTgxYWQwMGMwOWEzZDM2ODlkMTQ5MTZjYTQxZDNjNmaYWk7V: 00:23:43.469 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:23:43.469 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:43.469 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:43.469 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:43.469 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:43.469 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:43.469 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:43.469 01:16:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.469 01:16:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.469 01:16:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.469 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:43.469 01:16:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:43.469 01:16:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:43.469 01:16:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:43.469 01:16:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:43.469 01:16:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
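The get_main_ns_ip fragment traced just above runs before every attach: it maps the transport under test to the environment variable holding the address to dial, dereferences it, and errors out when either is unset. A condensed reading of the nvmf/common.sh logic reconstructed from this trace; the TEST_TRANSPORT variable name is an assumption, since the trace only shows its expanded value (tcp).

# Condensed sketch of get_main_ns_ip as traced above (nvmf/common.sh@741-755).
get_main_ns_ip() {
	local ip
	local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
	# Bail out if the transport is unknown or has no candidate variable.
	[[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
	ip=${!ip_candidates[$TEST_TRANSPORT]}  # indirect expansion, e.g. $NVMF_INITIATOR_IP
	[[ -z $ip ]] && return 1
	echo "$ip"  # resolves to 10.0.0.1 for every tcp attach in this run
}

That resolved address is what each subsequent bdev_nvme_attach_controller call in the trace passes as -a 10.0.0.1.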
00:23:43.469 01:16:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:43.469 01:16:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:43.469 01:16:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:43.469 01:16:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:43.469 01:16:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:43.469 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:43.469 01:16:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.469 01:16:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.727 nvme0n1 00:23:43.727 01:16:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.727 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:43.727 01:16:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.727 01:16:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.727 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:43.727 01:16:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.727 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:43.727 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:43.727 01:16:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.727 01:16:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.727 01:16:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.727 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:43.727 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:23:43.727 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:43.727 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:43.727 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:43.727 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:43.727 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTBlZDgyZjBmODc1MzQ1ZGJjZDgzMmFmYTY1NzcxNDI3MDAzZjEwNzhjNjg5ZWU4MGQ1NTdhNjUxODJhMDI0Np/o7pE=: 00:23:43.727 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:43.727 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:43.727 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:43.727 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTBlZDgyZjBmODc1MzQ1ZGJjZDgzMmFmYTY1NzcxNDI3MDAzZjEwNzhjNjg5ZWU4MGQ1NTdhNjUxODJhMDI0Np/o7pE=: 00:23:43.727 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:43.727 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:23:43.727 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:43.727 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:43.727 
01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:43.727 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:43.727 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:43.727 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:43.727 01:16:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.727 01:16:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.727 01:16:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.727 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:43.727 01:16:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:43.727 01:16:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:43.727 01:16:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:43.727 01:16:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:43.727 01:16:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:43.727 01:16:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:43.727 01:16:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:43.727 01:16:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:43.727 01:16:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:43.727 01:16:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:43.727 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:43.727 01:16:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.727 01:16:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.985 nvme0n1 00:23:43.985 01:16:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.985 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:43.985 01:16:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.985 01:16:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.985 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:43.985 01:16:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.985 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:43.985 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:43.985 01:16:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.985 01:16:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.985 01:16:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.985 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:43.985 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:43.985 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:23:43.985 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:43.985 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:43.985 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:43.985 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:43.985 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdmOTQxNTEwNDFmMzY1YjQ2ZGUyZmUzNTBhYjU1N2a/fNit: 00:23:43.985 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjJiZmY1YzIxMmMwZGE5YmI2MGUyN2NiZTE0OWQ3MDY2YjgwYTgwM2Y0NjdhZmU4MTJhZjg5MDdjYTM3Yjc5ZhS/xuQ=: 00:23:43.985 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:43.985 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:43.985 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdmOTQxNTEwNDFmMzY1YjQ2ZGUyZmUzNTBhYjU1N2a/fNit: 00:23:43.985 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjJiZmY1YzIxMmMwZGE5YmI2MGUyN2NiZTE0OWQ3MDY2YjgwYTgwM2Y0NjdhZmU4MTJhZjg5MDdjYTM3Yjc5ZhS/xuQ=: ]] 00:23:43.985 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjJiZmY1YzIxMmMwZGE5YmI2MGUyN2NiZTE0OWQ3MDY2YjgwYTgwM2Y0NjdhZmU4MTJhZjg5MDdjYTM3Yjc5ZhS/xuQ=: 00:23:43.985 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:23:43.985 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:43.985 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:43.985 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:43.985 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:43.985 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:43.985 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:43.985 01:16:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.985 01:16:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.985 01:16:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.985 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:43.985 01:16:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:43.985 01:16:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:43.985 01:16:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:43.985 01:16:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:43.985 01:16:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:43.985 01:16:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:43.985 01:16:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:43.985 01:16:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:43.985 01:16:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:43.985 01:16:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:43.985 01:16:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:43.985 01:16:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.985 01:16:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.244 nvme0n1 00:23:44.244 01:17:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.244 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:44.244 01:17:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.244 01:17:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.244 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:44.244 01:17:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.244 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:44.244 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:44.244 01:17:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.244 01:17:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.244 01:17:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.244 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:44.244 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:23:44.244 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:44.244 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:44.244 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:44.244 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:44.244 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzY1NjU4MmYzYzBmN2E3MGM3ZjE4OGNjYmEyMjAxMjVhZDRiY2Q1MmNlNDFmYjllG7L9XA==: 00:23:44.244 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzk4MjA4ZDMzZDhjZTQ4NDBjM2FkYTBlZGU2M2MxZGE2ODY2YmU3ZDZmYTNiMmI4bFoAIQ==: 00:23:44.244 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:44.244 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:44.244 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzY1NjU4MmYzYzBmN2E3MGM3ZjE4OGNjYmEyMjAxMjVhZDRiY2Q1MmNlNDFmYjllG7L9XA==: 00:23:44.244 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzk4MjA4ZDMzZDhjZTQ4NDBjM2FkYTBlZGU2M2MxZGE2ODY2YmU3ZDZmYTNiMmI4bFoAIQ==: ]] 00:23:44.244 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzk4MjA4ZDMzZDhjZTQ4NDBjM2FkYTBlZGU2M2MxZGE2ODY2YmU3ZDZmYTNiMmI4bFoAIQ==: 00:23:44.244 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:23:44.244 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:44.244 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:44.244 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:44.244 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:44.244 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:44.244 01:17:00 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:44.244 01:17:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.244 01:17:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.244 01:17:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.244 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:44.244 01:17:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:44.244 01:17:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:44.244 01:17:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:44.244 01:17:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:44.244 01:17:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:44.244 01:17:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:44.244 01:17:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:44.244 01:17:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:44.244 01:17:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:44.244 01:17:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:44.244 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:44.244 01:17:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.244 01:17:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.809 nvme0n1 00:23:44.809 01:17:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.809 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:44.809 01:17:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.809 01:17:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.809 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:44.809 01:17:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.809 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:44.809 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:44.809 01:17:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.809 01:17:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.809 01:17:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.809 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:44.809 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:23:44.809 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:44.809 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:44.809 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:44.809 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
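The xtrace above repeats one authentication cycle per key, and the same cycle continues for keyid 2 below. A minimal sketch of a single cycle, reconstructed only from the commands visible in this trace (rpc_cmd and nvmet_auth_set_key are helpers from the autotest harness and host/auth.sh; the address and NQNs are the values used in this run; the DHHC-1:NN:... strings are the NVMe in-band auth secret representation):

    # program the target-side DH-HMAC-CHAP key for keyid 1
    nvmet_auth_set_key sha512 ffdhe4096 1
    # restrict the host to the digest/dhgroup under test
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
    # connect with the host key plus the controller (bidirectional) key
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # verify the controller authenticated and came up, then tear down
    rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expects nvme0
    rpc_cmd bdev_nvme_detach_controller nvme0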
00:23:44.809 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTJmYmEyZjE3YzNiOTY3NmQ2NGMzZTE1MWIxYWJiZjSywDe0: 00:23:44.809 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDcwNjI5Y2NiZDY0OTcwODQyYzZiMGQ0NjAyMTg0NGHT7qr6: 00:23:44.809 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:44.809 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:44.809 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTJmYmEyZjE3YzNiOTY3NmQ2NGMzZTE1MWIxYWJiZjSywDe0: 00:23:44.809 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDcwNjI5Y2NiZDY0OTcwODQyYzZiMGQ0NjAyMTg0NGHT7qr6: ]] 00:23:44.809 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDcwNjI5Y2NiZDY0OTcwODQyYzZiMGQ0NjAyMTg0NGHT7qr6: 00:23:44.809 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:23:44.809 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:44.809 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:44.809 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:44.809 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:44.809 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:44.809 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:44.809 01:17:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.809 01:17:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.809 01:17:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.809 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:44.809 01:17:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:44.809 01:17:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:44.809 01:17:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:44.809 01:17:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:44.809 01:17:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:44.809 01:17:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:44.809 01:17:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:44.809 01:17:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:44.809 01:17:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:44.809 01:17:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:44.809 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:44.809 01:17:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.809 01:17:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.067 nvme0n1 00:23:45.067 01:17:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.067 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:23:45.067 01:17:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.067 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:45.067 01:17:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.067 01:17:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.067 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:45.067 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:45.067 01:17:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.067 01:17:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.067 01:17:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.067 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:45.067 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:23:45.067 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:45.067 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:45.067 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:45.067 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:45.067 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGI0OTExOTYzNTk0YjUyN2E3MDJmZTQ1MWE1N2NjZTA0Yjg4NTNiYjhkNWRhMGJmJJWzog==: 00:23:45.067 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTgxYWQwMGMwOWEzZDM2ODlkMTQ5MTZjYTQxZDNjNmaYWk7V: 00:23:45.067 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:45.067 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:45.067 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGI0OTExOTYzNTk0YjUyN2E3MDJmZTQ1MWE1N2NjZTA0Yjg4NTNiYjhkNWRhMGJmJJWzog==: 00:23:45.067 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTgxYWQwMGMwOWEzZDM2ODlkMTQ5MTZjYTQxZDNjNmaYWk7V: ]] 00:23:45.067 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTgxYWQwMGMwOWEzZDM2ODlkMTQ5MTZjYTQxZDNjNmaYWk7V: 00:23:45.067 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:23:45.067 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:45.067 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:45.067 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:45.067 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:45.067 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:45.067 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:45.067 01:17:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.067 01:17:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.067 01:17:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.067 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:45.067 01:17:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:23:45.067 01:17:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:45.067 01:17:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:45.067 01:17:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:45.068 01:17:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:45.068 01:17:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:45.068 01:17:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:45.068 01:17:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:45.068 01:17:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:45.068 01:17:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:45.068 01:17:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:45.068 01:17:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.068 01:17:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.325 nvme0n1 00:23:45.325 01:17:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.325 01:17:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:45.325 01:17:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.325 01:17:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:45.325 01:17:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.325 01:17:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.325 01:17:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:45.325 01:17:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:45.325 01:17:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.325 01:17:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.325 01:17:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.325 01:17:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:45.325 01:17:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:23:45.325 01:17:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:45.325 01:17:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:45.325 01:17:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:45.325 01:17:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:45.325 01:17:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTBlZDgyZjBmODc1MzQ1ZGJjZDgzMmFmYTY1NzcxNDI3MDAzZjEwNzhjNjg5ZWU4MGQ1NTdhNjUxODJhMDI0Np/o7pE=: 00:23:45.325 01:17:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:45.325 01:17:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:45.325 01:17:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:45.325 01:17:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MTBlZDgyZjBmODc1MzQ1ZGJjZDgzMmFmYTY1NzcxNDI3MDAzZjEwNzhjNjg5ZWU4MGQ1NTdhNjUxODJhMDI0Np/o7pE=: 00:23:45.325 01:17:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:45.325 01:17:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:23:45.325 01:17:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:45.325 01:17:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:45.325 01:17:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:45.325 01:17:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:45.325 01:17:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:45.325 01:17:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:45.325 01:17:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.325 01:17:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.325 01:17:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.325 01:17:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:45.325 01:17:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:45.325 01:17:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:45.325 01:17:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:45.325 01:17:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:45.325 01:17:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:45.325 01:17:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:45.325 01:17:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:45.325 01:17:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:45.325 01:17:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:45.325 01:17:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:45.325 01:17:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:45.325 01:17:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.325 01:17:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.583 nvme0n1 00:23:45.583 01:17:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.583 01:17:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:45.583 01:17:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.583 01:17:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.583 01:17:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:45.583 01:17:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.583 01:17:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:45.583 01:17:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:45.583 01:17:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:23:45.583 01:17:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.583 01:17:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.583 01:17:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:45.583 01:17:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:45.583 01:17:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:23:45.583 01:17:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:45.583 01:17:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:45.583 01:17:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:45.583 01:17:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:45.583 01:17:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdmOTQxNTEwNDFmMzY1YjQ2ZGUyZmUzNTBhYjU1N2a/fNit: 00:23:45.583 01:17:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjJiZmY1YzIxMmMwZGE5YmI2MGUyN2NiZTE0OWQ3MDY2YjgwYTgwM2Y0NjdhZmU4MTJhZjg5MDdjYTM3Yjc5ZhS/xuQ=: 00:23:45.583 01:17:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:45.583 01:17:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:45.583 01:17:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdmOTQxNTEwNDFmMzY1YjQ2ZGUyZmUzNTBhYjU1N2a/fNit: 00:23:45.583 01:17:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjJiZmY1YzIxMmMwZGE5YmI2MGUyN2NiZTE0OWQ3MDY2YjgwYTgwM2Y0NjdhZmU4MTJhZjg5MDdjYTM3Yjc5ZhS/xuQ=: ]] 00:23:45.583 01:17:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjJiZmY1YzIxMmMwZGE5YmI2MGUyN2NiZTE0OWQ3MDY2YjgwYTgwM2Y0NjdhZmU4MTJhZjg5MDdjYTM3Yjc5ZhS/xuQ=: 00:23:45.583 01:17:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:23:45.583 01:17:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:45.583 01:17:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:45.583 01:17:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:45.583 01:17:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:45.583 01:17:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:45.583 01:17:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:45.583 01:17:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.583 01:17:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.583 01:17:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.583 01:17:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:45.583 01:17:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:45.583 01:17:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:45.583 01:17:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:45.583 01:17:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:45.583 01:17:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:45.583 01:17:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
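Before each attach, get_main_ns_ip picks the address to dial: nvmf/common.sh keeps an associative array mapping transport to the name of the environment variable holding the right IP, then resolves it (10.0.0.1 for tcp in this run, as the trace continues below). A minimal sketch of that selection; the transport variable name ($TEST_TRANSPORT) and the use of indirect expansion are assumptions inferred from the substituted values the trace echoes:

    get_main_ns_ip() {
        local ip
        declare -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP
            ["tcp"]=NVMF_INITIATOR_IP
        )
        ip=${ip_candidates[$TEST_TRANSPORT]}   # assumed variable; trace shows tcp
        [[ -z $ip ]] && return 1               # unknown transport
        [[ -z ${!ip} ]] && return 1            # variable named by $ip is empty
        echo "${!ip}"                          # resolves to 10.0.0.1 in this run
    }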
00:23:45.583 01:17:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:45.583 01:17:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:45.583 01:17:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:45.583 01:17:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:45.583 01:17:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:45.583 01:17:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.583 01:17:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.146 nvme0n1 00:23:46.146 01:17:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.146 01:17:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:46.146 01:17:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.146 01:17:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.146 01:17:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:46.146 01:17:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.146 01:17:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:46.146 01:17:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:46.146 01:17:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.146 01:17:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.146 01:17:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.146 01:17:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:46.146 01:17:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:23:46.146 01:17:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:46.146 01:17:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:46.146 01:17:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:46.146 01:17:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:46.146 01:17:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzY1NjU4MmYzYzBmN2E3MGM3ZjE4OGNjYmEyMjAxMjVhZDRiY2Q1MmNlNDFmYjllG7L9XA==: 00:23:46.146 01:17:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzk4MjA4ZDMzZDhjZTQ4NDBjM2FkYTBlZGU2M2MxZGE2ODY2YmU3ZDZmYTNiMmI4bFoAIQ==: 00:23:46.146 01:17:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:46.146 01:17:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:46.146 01:17:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzY1NjU4MmYzYzBmN2E3MGM3ZjE4OGNjYmEyMjAxMjVhZDRiY2Q1MmNlNDFmYjllG7L9XA==: 00:23:46.146 01:17:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzk4MjA4ZDMzZDhjZTQ4NDBjM2FkYTBlZGU2M2MxZGE2ODY2YmU3ZDZmYTNiMmI4bFoAIQ==: ]] 00:23:46.146 01:17:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzk4MjA4ZDMzZDhjZTQ4NDBjM2FkYTBlZGU2M2MxZGE2ODY2YmU3ZDZmYTNiMmI4bFoAIQ==: 00:23:46.146 01:17:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
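Note the ckey assignment in each cycle: the controller key is optional, and auth.sh builds the flag with bash's :+ expansion, so --dhchap-ctrlr-key only materializes when ckeys[keyid] is non-empty. That is why keyid 4 above (whose ckey echoes as empty, hence the [[ -z '' ]] test) attaches with --dhchap-key key4 alone, while keyids 0-3 authenticate bidirectionally. A sketch of the expansion as echoed at host/auth.sh@58; splicing "${ckey[@]}" into the attach is an assumption about how the array is consumed:

    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    # ckeys[4]=""        -> ckey=()                             host-only auth
    # ckeys[1] non-empty -> ckey=(--dhchap-ctrlr-key "ckey1")   mutual auth
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"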
00:23:46.146 01:17:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:46.146 01:17:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:46.146 01:17:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:46.146 01:17:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:46.146 01:17:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:46.146 01:17:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:46.146 01:17:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.146 01:17:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.146 01:17:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.146 01:17:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:46.146 01:17:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:46.146 01:17:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:46.146 01:17:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:46.146 01:17:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:46.146 01:17:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:46.146 01:17:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:46.146 01:17:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:46.146 01:17:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:46.146 01:17:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:46.146 01:17:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:46.146 01:17:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:46.146 01:17:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.147 01:17:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.711 nvme0n1 00:23:46.711 01:17:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.711 01:17:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:46.711 01:17:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:46.711 01:17:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.711 01:17:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.711 01:17:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.711 01:17:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:46.711 01:17:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:46.711 01:17:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.711 01:17:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.711 01:17:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.711 01:17:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:23:46.711 01:17:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:23:46.711 01:17:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:46.711 01:17:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:46.711 01:17:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:46.711 01:17:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:46.711 01:17:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTJmYmEyZjE3YzNiOTY3NmQ2NGMzZTE1MWIxYWJiZjSywDe0: 00:23:46.711 01:17:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDcwNjI5Y2NiZDY0OTcwODQyYzZiMGQ0NjAyMTg0NGHT7qr6: 00:23:46.711 01:17:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:46.711 01:17:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:46.711 01:17:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTJmYmEyZjE3YzNiOTY3NmQ2NGMzZTE1MWIxYWJiZjSywDe0: 00:23:46.711 01:17:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDcwNjI5Y2NiZDY0OTcwODQyYzZiMGQ0NjAyMTg0NGHT7qr6: ]] 00:23:46.711 01:17:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDcwNjI5Y2NiZDY0OTcwODQyYzZiMGQ0NjAyMTg0NGHT7qr6: 00:23:46.711 01:17:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:23:46.711 01:17:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:46.711 01:17:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:46.711 01:17:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:46.711 01:17:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:46.711 01:17:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:46.711 01:17:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:46.711 01:17:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.711 01:17:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.711 01:17:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.711 01:17:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:46.711 01:17:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:46.711 01:17:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:46.711 01:17:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:46.711 01:17:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:46.711 01:17:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:46.711 01:17:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:46.711 01:17:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:46.711 01:17:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:46.711 01:17:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:46.711 01:17:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:46.711 01:17:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:46.711 01:17:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.711 01:17:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.275 nvme0n1 00:23:47.275 01:17:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.275 01:17:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:47.275 01:17:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.275 01:17:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.275 01:17:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:47.275 01:17:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.275 01:17:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:47.275 01:17:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:47.275 01:17:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.275 01:17:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.275 01:17:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.275 01:17:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:47.275 01:17:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:23:47.275 01:17:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:47.275 01:17:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:47.275 01:17:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:47.275 01:17:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:47.275 01:17:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGI0OTExOTYzNTk0YjUyN2E3MDJmZTQ1MWE1N2NjZTA0Yjg4NTNiYjhkNWRhMGJmJJWzog==: 00:23:47.275 01:17:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTgxYWQwMGMwOWEzZDM2ODlkMTQ5MTZjYTQxZDNjNmaYWk7V: 00:23:47.275 01:17:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:47.275 01:17:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:47.275 01:17:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGI0OTExOTYzNTk0YjUyN2E3MDJmZTQ1MWE1N2NjZTA0Yjg4NTNiYjhkNWRhMGJmJJWzog==: 00:23:47.275 01:17:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTgxYWQwMGMwOWEzZDM2ODlkMTQ5MTZjYTQxZDNjNmaYWk7V: ]] 00:23:47.275 01:17:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTgxYWQwMGMwOWEzZDM2ODlkMTQ5MTZjYTQxZDNjNmaYWk7V: 00:23:47.275 01:17:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:23:47.275 01:17:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:47.275 01:17:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:47.275 01:17:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:47.275 01:17:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:47.275 01:17:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:47.275 01:17:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:47.275 01:17:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.275 01:17:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.275 01:17:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.275 01:17:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:47.275 01:17:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:47.275 01:17:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:47.275 01:17:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:47.275 01:17:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:47.275 01:17:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:47.275 01:17:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:47.275 01:17:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:47.275 01:17:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:47.275 01:17:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:47.275 01:17:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:47.275 01:17:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:47.275 01:17:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.275 01:17:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.836 nvme0n1 00:23:47.836 01:17:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.836 01:17:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:47.836 01:17:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.836 01:17:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.836 01:17:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:47.836 01:17:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.836 01:17:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:47.836 01:17:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:47.836 01:17:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.836 01:17:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.836 01:17:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.836 01:17:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:47.836 01:17:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:23:47.836 01:17:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:47.836 01:17:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:47.836 01:17:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:47.836 01:17:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:47.836 01:17:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MTBlZDgyZjBmODc1MzQ1ZGJjZDgzMmFmYTY1NzcxNDI3MDAzZjEwNzhjNjg5ZWU4MGQ1NTdhNjUxODJhMDI0Np/o7pE=: 00:23:47.836 01:17:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:47.836 01:17:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:47.836 01:17:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:47.836 01:17:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTBlZDgyZjBmODc1MzQ1ZGJjZDgzMmFmYTY1NzcxNDI3MDAzZjEwNzhjNjg5ZWU4MGQ1NTdhNjUxODJhMDI0Np/o7pE=: 00:23:47.836 01:17:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:47.836 01:17:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:23:47.836 01:17:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:47.836 01:17:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:47.836 01:17:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:47.836 01:17:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:47.836 01:17:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:47.836 01:17:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:47.836 01:17:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.836 01:17:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.836 01:17:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.836 01:17:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:47.836 01:17:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:47.836 01:17:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:47.836 01:17:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:47.836 01:17:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:47.836 01:17:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:47.836 01:17:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:47.836 01:17:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:47.836 01:17:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:47.836 01:17:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:47.836 01:17:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:47.836 01:17:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:47.836 01:17:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.836 01:17:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.399 nvme0n1 00:23:48.399 01:17:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.399 01:17:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:48.399 01:17:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.399 01:17:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.399 01:17:04 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:48.399 01:17:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.399 01:17:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:48.399 01:17:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:48.399 01:17:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.399 01:17:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.399 01:17:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.399 01:17:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:48.399 01:17:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:48.399 01:17:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:23:48.399 01:17:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:48.399 01:17:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:48.399 01:17:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:48.399 01:17:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:48.399 01:17:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdmOTQxNTEwNDFmMzY1YjQ2ZGUyZmUzNTBhYjU1N2a/fNit: 00:23:48.399 01:17:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjJiZmY1YzIxMmMwZGE5YmI2MGUyN2NiZTE0OWQ3MDY2YjgwYTgwM2Y0NjdhZmU4MTJhZjg5MDdjYTM3Yjc5ZhS/xuQ=: 00:23:48.399 01:17:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:48.399 01:17:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:48.399 01:17:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdmOTQxNTEwNDFmMzY1YjQ2ZGUyZmUzNTBhYjU1N2a/fNit: 00:23:48.399 01:17:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjJiZmY1YzIxMmMwZGE5YmI2MGUyN2NiZTE0OWQ3MDY2YjgwYTgwM2Y0NjdhZmU4MTJhZjg5MDdjYTM3Yjc5ZhS/xuQ=: ]] 00:23:48.399 01:17:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjJiZmY1YzIxMmMwZGE5YmI2MGUyN2NiZTE0OWQ3MDY2YjgwYTgwM2Y0NjdhZmU4MTJhZjg5MDdjYTM3Yjc5ZhS/xuQ=: 00:23:48.399 01:17:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:23:48.399 01:17:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:48.399 01:17:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:48.399 01:17:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:48.399 01:17:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:48.399 01:17:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:48.399 01:17:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:48.399 01:17:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.399 01:17:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.399 01:17:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.399 01:17:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:48.399 01:17:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:48.399 01:17:04 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:23:48.399 01:17:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:48.399 01:17:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:48.399 01:17:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:48.399 01:17:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:48.399 01:17:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:48.399 01:17:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:48.399 01:17:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:48.399 01:17:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:48.399 01:17:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:48.399 01:17:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.399 01:17:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.331 nvme0n1 00:23:49.331 01:17:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.331 01:17:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:49.331 01:17:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.331 01:17:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.331 01:17:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:49.331 01:17:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.331 01:17:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:49.331 01:17:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:49.331 01:17:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.331 01:17:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.331 01:17:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.331 01:17:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:49.331 01:17:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:23:49.331 01:17:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:49.331 01:17:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:49.331 01:17:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:49.331 01:17:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:49.331 01:17:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzY1NjU4MmYzYzBmN2E3MGM3ZjE4OGNjYmEyMjAxMjVhZDRiY2Q1MmNlNDFmYjllG7L9XA==: 00:23:49.331 01:17:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzk4MjA4ZDMzZDhjZTQ4NDBjM2FkYTBlZGU2M2MxZGE2ODY2YmU3ZDZmYTNiMmI4bFoAIQ==: 00:23:49.331 01:17:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:49.331 01:17:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:49.331 01:17:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YzY1NjU4MmYzYzBmN2E3MGM3ZjE4OGNjYmEyMjAxMjVhZDRiY2Q1MmNlNDFmYjllG7L9XA==: 00:23:49.331 01:17:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzk4MjA4ZDMzZDhjZTQ4NDBjM2FkYTBlZGU2M2MxZGE2ODY2YmU3ZDZmYTNiMmI4bFoAIQ==: ]] 00:23:49.331 01:17:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzk4MjA4ZDMzZDhjZTQ4NDBjM2FkYTBlZGU2M2MxZGE2ODY2YmU3ZDZmYTNiMmI4bFoAIQ==: 00:23:49.331 01:17:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:23:49.331 01:17:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:49.331 01:17:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:49.331 01:17:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:49.331 01:17:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:49.331 01:17:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:49.331 01:17:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:49.331 01:17:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.331 01:17:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.331 01:17:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.331 01:17:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:49.331 01:17:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:49.331 01:17:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:49.331 01:17:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:49.331 01:17:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:49.331 01:17:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:49.331 01:17:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:49.331 01:17:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:49.331 01:17:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:49.331 01:17:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:49.332 01:17:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:49.332 01:17:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:49.332 01:17:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.332 01:17:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.264 nvme0n1 00:23:50.264 01:17:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.264 01:17:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:50.264 01:17:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:50.264 01:17:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.264 01:17:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.264 01:17:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.264 01:17:06 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:50.264 01:17:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:50.264 01:17:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.264 01:17:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.264 01:17:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.264 01:17:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:50.264 01:17:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:23:50.264 01:17:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:50.264 01:17:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:50.264 01:17:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:50.264 01:17:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:50.264 01:17:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTJmYmEyZjE3YzNiOTY3NmQ2NGMzZTE1MWIxYWJiZjSywDe0: 00:23:50.264 01:17:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDcwNjI5Y2NiZDY0OTcwODQyYzZiMGQ0NjAyMTg0NGHT7qr6: 00:23:50.264 01:17:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:50.264 01:17:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:50.264 01:17:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTJmYmEyZjE3YzNiOTY3NmQ2NGMzZTE1MWIxYWJiZjSywDe0: 00:23:50.264 01:17:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDcwNjI5Y2NiZDY0OTcwODQyYzZiMGQ0NjAyMTg0NGHT7qr6: ]] 00:23:50.265 01:17:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDcwNjI5Y2NiZDY0OTcwODQyYzZiMGQ0NjAyMTg0NGHT7qr6: 00:23:50.265 01:17:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:23:50.265 01:17:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:50.265 01:17:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:50.265 01:17:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:50.265 01:17:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:50.265 01:17:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:50.265 01:17:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:50.265 01:17:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.265 01:17:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.265 01:17:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.265 01:17:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:50.265 01:17:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:50.265 01:17:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:50.265 01:17:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:50.265 01:17:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:50.265 01:17:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:50.265 01:17:06 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:50.265 01:17:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:50.265 01:17:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:50.265 01:17:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:50.265 01:17:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:50.265 01:17:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:50.265 01:17:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.265 01:17:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.198 nvme0n1 00:23:51.198 01:17:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.198 01:17:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:51.198 01:17:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:51.198 01:17:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.198 01:17:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.198 01:17:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.198 01:17:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:51.198 01:17:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:51.198 01:17:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.198 01:17:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.198 01:17:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.198 01:17:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:51.198 01:17:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:23:51.198 01:17:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:51.198 01:17:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:51.198 01:17:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:51.198 01:17:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:51.198 01:17:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGI0OTExOTYzNTk0YjUyN2E3MDJmZTQ1MWE1N2NjZTA0Yjg4NTNiYjhkNWRhMGJmJJWzog==: 00:23:51.198 01:17:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTgxYWQwMGMwOWEzZDM2ODlkMTQ5MTZjYTQxZDNjNmaYWk7V: 00:23:51.198 01:17:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:51.198 01:17:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:51.198 01:17:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGI0OTExOTYzNTk0YjUyN2E3MDJmZTQ1MWE1N2NjZTA0Yjg4NTNiYjhkNWRhMGJmJJWzog==: 00:23:51.198 01:17:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTgxYWQwMGMwOWEzZDM2ODlkMTQ5MTZjYTQxZDNjNmaYWk7V: ]] 00:23:51.198 01:17:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTgxYWQwMGMwOWEzZDM2ODlkMTQ5MTZjYTQxZDNjNmaYWk7V: 00:23:51.198 01:17:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:23:51.198 01:17:07 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:51.198 01:17:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:51.198 01:17:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:51.198 01:17:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:51.198 01:17:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:51.198 01:17:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:51.198 01:17:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.198 01:17:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.198 01:17:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.198 01:17:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:51.198 01:17:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:51.198 01:17:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:51.198 01:17:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:51.198 01:17:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:51.198 01:17:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:51.198 01:17:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:51.198 01:17:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:51.198 01:17:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:51.198 01:17:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:51.198 01:17:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:51.199 01:17:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:51.199 01:17:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.199 01:17:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.132 nvme0n1 00:23:52.132 01:17:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.132 01:17:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:52.132 01:17:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.132 01:17:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.132 01:17:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:52.132 01:17:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.132 01:17:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.132 01:17:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:52.132 01:17:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.132 01:17:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.132 01:17:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.132 01:17:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:23:52.132 01:17:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:23:52.132 01:17:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:52.132 01:17:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:52.132 01:17:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:52.132 01:17:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:52.132 01:17:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTBlZDgyZjBmODc1MzQ1ZGJjZDgzMmFmYTY1NzcxNDI3MDAzZjEwNzhjNjg5ZWU4MGQ1NTdhNjUxODJhMDI0Np/o7pE=: 00:23:52.132 01:17:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:52.132 01:17:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:52.132 01:17:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:52.132 01:17:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTBlZDgyZjBmODc1MzQ1ZGJjZDgzMmFmYTY1NzcxNDI3MDAzZjEwNzhjNjg5ZWU4MGQ1NTdhNjUxODJhMDI0Np/o7pE=: 00:23:52.132 01:17:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:52.132 01:17:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:23:52.132 01:17:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:52.132 01:17:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:52.132 01:17:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:52.132 01:17:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:52.132 01:17:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:52.132 01:17:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:52.132 01:17:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.132 01:17:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.132 01:17:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.132 01:17:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:52.132 01:17:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:52.132 01:17:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:52.132 01:17:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:52.132 01:17:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:52.132 01:17:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:52.132 01:17:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:52.132 01:17:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:52.132 01:17:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:52.132 01:17:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:52.132 01:17:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:52.132 01:17:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:52.132 01:17:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:23:52.132 01:17:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.117 nvme0n1 00:23:53.117 01:17:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.117 01:17:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:53.117 01:17:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:53.117 01:17:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.117 01:17:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.117 01:17:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.117 01:17:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:53.117 01:17:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:53.117 01:17:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.117 01:17:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.117 01:17:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.117 01:17:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:53.117 01:17:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:53.117 01:17:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:53.117 01:17:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:53.117 01:17:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:53.117 01:17:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzY1NjU4MmYzYzBmN2E3MGM3ZjE4OGNjYmEyMjAxMjVhZDRiY2Q1MmNlNDFmYjllG7L9XA==: 00:23:53.117 01:17:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzk4MjA4ZDMzZDhjZTQ4NDBjM2FkYTBlZGU2M2MxZGE2ODY2YmU3ZDZmYTNiMmI4bFoAIQ==: 00:23:53.117 01:17:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:53.117 01:17:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:53.117 01:17:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzY1NjU4MmYzYzBmN2E3MGM3ZjE4OGNjYmEyMjAxMjVhZDRiY2Q1MmNlNDFmYjllG7L9XA==: 00:23:53.117 01:17:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzk4MjA4ZDMzZDhjZTQ4NDBjM2FkYTBlZGU2M2MxZGE2ODY2YmU3ZDZmYTNiMmI4bFoAIQ==: ]] 00:23:53.117 01:17:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzk4MjA4ZDMzZDhjZTQ4NDBjM2FkYTBlZGU2M2MxZGE2ODY2YmU3ZDZmYTNiMmI4bFoAIQ==: 00:23:53.117 01:17:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:53.117 01:17:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.117 01:17:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.117 01:17:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.117 01:17:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:23:53.117 01:17:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:53.117 01:17:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:53.117 01:17:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:53.117 01:17:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:53.117 
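
Each pass of the keyid loop above follows the same pattern: write one DHHC-1 secret pair plus the hmac(sha512)/ffdhe8192 policy into the kernel target's entry for host0, mirror that policy on the SPDK initiator, attach, confirm the controller came up, and detach. A condensed sketch of that loop, reconstructed from the auth.sh trace; the configfs attribute names (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrlr_key) are assumptions the trace does not spell out, rpc.py stands in for the suite's rpc_cmd wrapper, and the key$keyid/ckey$keyid names assume the DHHC-1 secrets were already registered with SPDK's keyring (e.g. via keyring_file_add_key), which happens before this excerpt:

  HOSTNQN=nqn.2024-02.io.spdk:host0
  SUBNQN=nqn.2024-02.io.spdk:cnode0
  HOSTDIR=/sys/kernel/config/nvmet/hosts/$HOSTNQN
  for keyid in "${!keys[@]}"; do                       # keys[]/ckeys[] hold DHHC-1:xx:... secrets
      echo 'hmac(sha512)'   > "$HOSTDIR/dhchap_hash"   # assumed attribute names, see lead-in
      echo ffdhe8192        > "$HOSTDIR/dhchap_dhgroup"
      echo "${keys[keyid]}" > "$HOSTDIR/dhchap_key"
      [[ -n ${ckeys[keyid]} ]] && echo "${ckeys[keyid]}" > "$HOSTDIR/dhchap_ctrlr_key"
      rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
      rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
          -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key "key$keyid" \
          ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
      [[ $(rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
      rpc.py bdev_nvme_detach_controller nvme0
  done

Note that keyid 4 carries no controller key (its ckey echoes empty above), which is why its attach line has no --dhchap-ctrlr-key: the ${ckeys[keyid]:+...} expansion drops the flag for empty entries.
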
01:17:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:53.117 01:17:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:53.117 01:17:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:53.117 01:17:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:53.117 01:17:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:53.117 01:17:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:53.117 01:17:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:53.117 01:17:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:23:53.117 01:17:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:53.117 01:17:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:53.117 01:17:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:53.117 01:17:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:53.117 01:17:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:53.117 01:17:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:53.117 01:17:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.117 01:17:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.117 request: 00:23:53.117 { 00:23:53.117 "name": "nvme0", 00:23:53.117 "trtype": "tcp", 00:23:53.117 "traddr": "10.0.0.1", 00:23:53.117 "adrfam": "ipv4", 00:23:53.117 "trsvcid": "4420", 00:23:53.117 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:53.117 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:53.117 "prchk_reftag": false, 00:23:53.117 "prchk_guard": false, 00:23:53.117 "hdgst": false, 00:23:53.117 "ddgst": false, 00:23:53.117 "method": "bdev_nvme_attach_controller", 00:23:53.117 "req_id": 1 00:23:53.117 } 00:23:53.117 Got JSON-RPC error response 00:23:53.117 response: 00:23:53.117 { 00:23:53.117 "code": -5, 00:23:53.117 "message": "Input/output error" 00:23:53.117 } 00:23:53.117 01:17:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:53.117 01:17:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:23:53.117 01:17:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:53.117 01:17:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:53.117 01:17:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:53.117 01:17:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:23:53.117 01:17:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.117 01:17:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.117 01:17:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:23:53.117 01:17:08 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.117 01:17:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:23:53.117 01:17:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:23:53.117 01:17:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:53.117 01:17:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:53.117 01:17:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:53.117 01:17:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:53.117 01:17:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:53.117 01:17:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:53.117 01:17:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:53.117 01:17:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:53.117 01:17:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:53.117 01:17:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:53.118 01:17:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:53.118 01:17:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:23:53.118 01:17:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:53.118 01:17:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:53.118 01:17:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:53.118 01:17:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:53.118 01:17:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:53.118 01:17:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:53.118 01:17:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.118 01:17:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.376 request: 00:23:53.376 { 00:23:53.376 "name": "nvme0", 00:23:53.376 "trtype": "tcp", 00:23:53.376 "traddr": "10.0.0.1", 00:23:53.376 "adrfam": "ipv4", 00:23:53.376 "trsvcid": "4420", 00:23:53.376 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:53.376 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:53.376 "prchk_reftag": false, 00:23:53.376 "prchk_guard": false, 00:23:53.376 "hdgst": false, 00:23:53.376 "ddgst": false, 00:23:53.376 "dhchap_key": "key2", 00:23:53.376 "method": "bdev_nvme_attach_controller", 00:23:53.376 "req_id": 1 00:23:53.376 } 00:23:53.376 Got JSON-RPC error response 00:23:53.376 response: 00:23:53.376 { 00:23:53.376 "code": -5, 00:23:53.376 "message": "Input/output error" 00:23:53.376 } 00:23:53.376 01:17:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:53.376 01:17:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:23:53.376 01:17:09 
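
Both failed attaches above are the point of this phase: with the target now demanding sha256/ffdhe2048 authentication for key 1, connecting with no key at all and then with the wrong key (key2) must both be rejected, and both rejections surface as JSON-RPC code -5 (Input/output error, i.e. -EIO). The NOT wrapper inverts the exit status so the suite only passes when the attach fails; a simplified stand-in for the autotest_common.sh helper (the real one also validates its argument, which is what the type -t lines in the trace are doing):

  NOT() { ! "$@"; }           # succeed only if the wrapped command fails
  NOT rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0      # no key: must be refused
  [[ $(rpc.py bdev_nvme_get_controllers | jq length) == 0 ]]          # and nothing got attached
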
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:53.376 01:17:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:53.376 01:17:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:53.376 01:17:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:23:53.376 01:17:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.376 01:17:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:23:53.376 01:17:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.376 01:17:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.376 01:17:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:23:53.376 01:17:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:23:53.376 01:17:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:53.376 01:17:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:53.376 01:17:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:53.376 01:17:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:53.376 01:17:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:53.376 01:17:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:53.376 01:17:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:53.376 01:17:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:53.376 01:17:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:53.376 01:17:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:53.376 01:17:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:53.376 01:17:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:23:53.376 01:17:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:53.376 01:17:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:53.376 01:17:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:53.376 01:17:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:53.376 01:17:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:53.376 01:17:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:53.376 01:17:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.376 01:17:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.376 request: 00:23:53.376 { 00:23:53.376 "name": "nvme0", 00:23:53.376 "trtype": "tcp", 00:23:53.376 "traddr": "10.0.0.1", 00:23:53.376 "adrfam": "ipv4", 
00:23:53.376 "trsvcid": "4420", 00:23:53.376 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:53.376 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:53.376 "prchk_reftag": false, 00:23:53.376 "prchk_guard": false, 00:23:53.376 "hdgst": false, 00:23:53.376 "ddgst": false, 00:23:53.376 "dhchap_key": "key1", 00:23:53.376 "dhchap_ctrlr_key": "ckey2", 00:23:53.376 "method": "bdev_nvme_attach_controller", 00:23:53.376 "req_id": 1 00:23:53.376 } 00:23:53.376 Got JSON-RPC error response 00:23:53.376 response: 00:23:53.376 { 00:23:53.376 "code": -5, 00:23:53.376 "message": "Input/output error" 00:23:53.376 } 00:23:53.376 01:17:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:53.376 01:17:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:23:53.376 01:17:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:53.376 01:17:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:53.376 01:17:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:53.376 01:17:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:23:53.376 01:17:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:23:53.376 01:17:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:23:53.376 01:17:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:53.376 01:17:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:23:53.376 01:17:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:53.376 01:17:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:23:53.376 01:17:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:53.376 01:17:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:53.376 rmmod nvme_tcp 00:23:53.376 rmmod nvme_fabrics 00:23:53.376 01:17:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:53.376 01:17:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:23:53.376 01:17:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:23:53.376 01:17:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 41819 ']' 00:23:53.376 01:17:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 41819 00:23:53.376 01:17:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 41819 ']' 00:23:53.376 01:17:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 41819 00:23:53.376 01:17:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:23:53.376 01:17:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:53.376 01:17:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 41819 00:23:53.376 01:17:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:53.376 01:17:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:53.376 01:17:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 41819' 00:23:53.376 killing process with pid 41819 00:23:53.376 01:17:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 41819 00:23:53.376 01:17:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 41819 00:23:53.635 01:17:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:53.635 
01:17:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:53.635 01:17:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:53.635 01:17:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:53.635 01:17:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:53.635 01:17:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:53.636 01:17:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:53.636 01:17:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:56.169 01:17:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:56.169 01:17:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:56.169 01:17:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:56.169 01:17:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:23:56.169 01:17:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:23:56.169 01:17:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:23:56.169 01:17:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:56.169 01:17:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:56.169 01:17:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:56.169 01:17:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:56.169 01:17:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:23:56.169 01:17:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:23:56.169 01:17:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:57.103 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:57.103 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:57.103 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:57.103 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:57.103 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:57.103 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:57.103 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:57.103 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:57.103 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:57.104 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:57.104 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:57.104 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:57.104 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:57.104 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:57.104 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:57.104 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:58.052 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:23:58.052 01:17:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.kNp /tmp/spdk.key-null.b7W /tmp/spdk.key-sha256.sYA /tmp/spdk.key-sha384.oCy /tmp/spdk.key-sha512.cgz 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:23:58.052 01:17:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:59.425 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:23:59.425 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:23:59.425 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:23:59.425 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:23:59.425 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:23:59.425 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:23:59.425 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:23:59.425 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:23:59.425 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:23:59.425 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:23:59.425 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:23:59.425 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:23:59.425 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:23:59.425 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:23:59.425 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:23:59.425 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:23:59.425 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:23:59.684 00:23:59.684 real 0m47.327s 00:23:59.684 user 0m44.392s 00:23:59.684 sys 0m6.024s 00:23:59.684 01:17:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:59.684 01:17:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.684 ************************************ 00:23:59.684 END TEST nvmf_auth_host 00:23:59.684 ************************************ 00:23:59.684 01:17:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:59.684 01:17:15 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:23:59.684 01:17:15 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:23:59.684 01:17:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:59.684 01:17:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:59.684 01:17:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:59.684 ************************************ 00:23:59.684 START TEST nvmf_digest 00:23:59.684 ************************************ 00:23:59.684 01:17:15 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:23:59.684 * Looking for test storage... 
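
The banner pairs and the real/user/sys triplet above (about 47s wall clock for the whole auth suite) come from the run_test wrapper, which times each sub-suite and brackets it with START/END markers; schematically (a reconstruction for orientation, not the verbatim autotest_common.sh helper):

  run_test() {
      local suite=$1; shift
      echo "************ START TEST $suite ************"
      time "$@"
      local rc=$?
      echo "************ END TEST $suite ************"
      return $rc
  }
  run_test nvmf_digest \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp
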
00:23:59.684 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:59.684 01:17:15 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:59.684 01:17:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:23:59.684 01:17:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:59.684 01:17:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:59.684 01:17:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:59.684 01:17:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:59.684 01:17:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:59.684 01:17:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:59.684 01:17:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:59.684 01:17:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:59.684 01:17:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:59.684 01:17:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:59.684 01:17:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:59.684 01:17:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:59.684 01:17:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:59.684 01:17:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:59.684 01:17:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:59.684 01:17:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:59.684 01:17:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:59.684 01:17:15 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:59.684 01:17:15 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:59.684 01:17:15 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:59.684 01:17:15 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.684 01:17:15 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.684 01:17:15 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.684 01:17:15 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:23:59.684 01:17:15 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.684 01:17:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:23:59.684 01:17:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:59.684 01:17:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:59.684 01:17:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:59.684 01:17:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:59.684 01:17:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:59.684 01:17:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:59.684 01:17:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:59.684 01:17:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:59.684 01:17:15 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:23:59.684 01:17:15 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:23:59.684 01:17:15 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:23:59.684 01:17:15 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:23:59.684 01:17:15 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:23:59.684 01:17:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:59.684 01:17:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:59.684 01:17:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:59.684 01:17:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:59.684 01:17:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:59.684 01:17:15 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:59.684 01:17:15 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:59.684 01:17:15 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:59.684 01:17:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:59.684 01:17:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:59.684 01:17:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:23:59.684 01:17:15 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:02.215 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:02.215 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:02.215 Found net devices under 0000:09:00.0: cvl_0_0 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:02.215 Found net devices under 0000:09:00.1: cvl_0_1 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:02.215 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:02.215 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms 00:24:02.215 00:24:02.215 --- 10.0.0.2 ping statistics --- 00:24:02.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:02.215 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:02.215 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:02.215 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:24:02.215 00:24:02.215 --- 10.0.0.1 ping statistics --- 00:24:02.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:02.215 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:02.215 ************************************ 00:24:02.215 START TEST nvmf_digest_clean 00:24:02.215 ************************************ 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:24:02.215 01:17:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:24:02.216 01:17:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:02.216 01:17:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:02.216 01:17:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:02.216 01:17:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=51006 00:24:02.216 01:17:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:02.216 01:17:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 51006 00:24:02.216 01:17:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 51006 ']' 00:24:02.216 01:17:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:02.216 
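
The nvmf_tcp_init block above reads as a recipe: of the two ice/e810 ports discovered earlier, cvl_0_0 is pushed into a private network namespace as the target-side NIC (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), an iptables rule opens the NVMe/TCP port, and the two pings prove reachability in both directions before any NVMe traffic flows. Condensed straight from the trace (the address flushes are omitted here):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target NIC into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # root ns -> target: 0.143 ms above
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root: 0.103 ms above
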
01:17:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:02.216 01:17:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:02.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:02.216 01:17:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:02.216 01:17:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:02.216 [2024-07-16 01:17:17.898573] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:24:02.216 [2024-07-16 01:17:17.898647] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:02.216 EAL: No free 2048 kB hugepages reported on node 1 00:24:02.216 [2024-07-16 01:17:17.963980] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:02.216 [2024-07-16 01:17:18.067811] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:02.216 [2024-07-16 01:17:18.067864] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:02.216 [2024-07-16 01:17:18.067876] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:02.216 [2024-07-16 01:17:18.067887] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:02.216 [2024-07-16 01:17:18.067896] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
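[Editor's note] The network bring-up traced earlier (namespace cvl_0_0_ns_spdk, 10.0.0.2 inside for the target, 10.0.0.1 outside for the initiator) is self-contained enough to reproduce without this rig's hardware. A minimal sketch of the same topology using a veth pair in place of the two physical e810 ports - veth0/veth1 are stand-ins for cvl_0_0/cvl_0_1, everything else mirrors the commands in the trace:

  # target namespace plus a veth pair standing in for the two NIC ports
  ip netns add cvl_0_0_ns_spdk
  ip link add veth0 type veth peer name veth1
  ip link set veth0 netns cvl_0_0_ns_spdk
  # target side gets 10.0.0.2 inside the namespace, initiator keeps 10.0.0.1 outside
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev veth0
  ip addr add 10.0.0.1/24 dev veth1
  ip netns exec cvl_0_0_ns_spdk ip link set veth0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  ip link set veth1 up
  # admit NVMe/TCP traffic on 4420 and verify reachability in both directions
  iptables -I INPUT 1 -i veth1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

With that in place, nvmf_tgt runs inside the namespace exactly as the trace shows: ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc.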
00:24:02.216 [2024-07-16 01:17:18.067921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:02.216 01:17:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:02.216 01:17:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:02.216 01:17:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:02.216 01:17:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:02.216 01:17:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:02.216 01:17:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:02.216 01:17:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:24:02.216 01:17:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:24:02.216 01:17:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:24:02.216 01:17:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.216 01:17:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:02.474 null0 00:24:02.474 [2024-07-16 01:17:18.224136] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:02.474 [2024-07-16 01:17:18.248379] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:02.474 01:17:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.474 01:17:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:24:02.474 01:17:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:02.474 01:17:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:02.474 01:17:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:02.474 01:17:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:02.474 01:17:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:02.474 01:17:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:02.474 01:17:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=51031 00:24:02.474 01:17:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:02.474 01:17:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 51031 /var/tmp/bperf.sock 00:24:02.474 01:17:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 51031 ']' 00:24:02.474 01:17:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:02.474 01:17:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:02.474 01:17:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:24:02.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:02.474 01:17:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:02.474 01:17:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:02.474 [2024-07-16 01:17:18.293530] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:24:02.474 [2024-07-16 01:17:18.293594] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid51031 ] 00:24:02.474 EAL: No free 2048 kB hugepages reported on node 1 00:24:02.474 [2024-07-16 01:17:18.349903] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:02.474 [2024-07-16 01:17:18.455230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:02.732 01:17:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:02.732 01:17:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:02.732 01:17:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:02.732 01:17:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:02.732 01:17:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:02.990 01:17:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:02.990 01:17:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:03.555 nvme0n1 00:24:03.555 01:17:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:03.555 01:17:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:03.555 Running I/O for 2 seconds... 
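[Editor's note] Each run_bperf pass follows the same shape: start bdevperf held at two gates, finish its framework init over a private RPC socket, attach the remote controller with TCP data digest enabled, then fire the timed workload. Condensed from the trace, with SPDK pointing at this workspace's checkout:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # --wait-for-rpc holds subsystem init until framework_start_init;
  # -z holds the workload itself until the perform_tests RPC
  $SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  # --ddgst enables the TCP data digest: a crc32c over every PDU payload
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # run the 2-second workload against the resulting nvme0n1 bdev
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

--ddgst is what makes this a digest test at all: it is that crc32c traffic the accel stats check after each run counts.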
00:24:05.452 00:24:05.452 Latency(us) 00:24:05.452 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:05.452 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:05.452 nvme0n1 : 2.00 17717.53 69.21 0.00 0.00 7215.47 3737.98 19903.53 00:24:05.452 =================================================================================================================== 00:24:05.452 Total : 17717.53 69.21 0.00 0.00 7215.47 3737.98 19903.53 00:24:05.452 0 00:24:05.452 01:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:05.452 01:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:05.452 01:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:05.452 01:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:05.452 | select(.opcode=="crc32c") 00:24:05.452 | "\(.module_name) \(.executed)"' 00:24:05.452 01:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:05.709 01:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:05.709 01:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:05.710 01:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:05.710 01:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:05.710 01:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 51031 00:24:05.710 01:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 51031 ']' 00:24:05.710 01:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 51031 00:24:05.710 01:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:05.710 01:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:05.710 01:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 51031 00:24:05.967 01:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:05.967 01:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:05.967 01:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 51031' 00:24:05.967 killing process with pid 51031 00:24:05.967 01:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 51031 00:24:05.967 Received shutdown signal, test time was about 2.000000 seconds 00:24:05.967 00:24:05.967 Latency(us) 00:24:05.967 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:05.967 =================================================================================================================== 00:24:05.967 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:05.967 01:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 51031 00:24:06.225 01:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:24:06.225 01:17:21 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:06.225 01:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:06.225 01:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:06.225 01:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:06.225 01:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:06.225 01:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:06.225 01:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=51556 00:24:06.225 01:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 51556 /var/tmp/bperf.sock 00:24:06.225 01:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:06.225 01:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 51556 ']' 00:24:06.225 01:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:06.225 01:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:06.225 01:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:06.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:06.226 01:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:06.226 01:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:06.226 [2024-07-16 01:17:22.020216] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:24:06.226 [2024-07-16 01:17:22.020321] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid51556 ] 00:24:06.226 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:06.226 Zero copy mechanism will not be used. 
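[Editor's note] The zero-copy notice is expected for the 128 KiB runs: per its own message, bdevperf compares the configured I/O size against a 65536-byte zero-copy threshold and falls back to copied sends above it. Restated trivially with the sizes from this log (the variable names are illustrative, not bdevperf's):

  io_size=131072 zcopy_threshold=65536
  if (( io_size > zcopy_threshold )); then
      echo "I/O size of $io_size is greater than zero copy threshold ($zcopy_threshold)."
  fi

It has no bearing on the digest math; it only changes how the socket layer hands buffers to the kernel.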
00:24:06.226 EAL: No free 2048 kB hugepages reported on node 1 00:24:06.226 [2024-07-16 01:17:22.078190] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:06.226 [2024-07-16 01:17:22.180678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:06.226 01:17:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:06.226 01:17:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:06.226 01:17:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:06.226 01:17:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:06.226 01:17:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:06.791 01:17:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:06.791 01:17:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:07.047 nvme0n1 00:24:07.047 01:17:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:07.047 01:17:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:07.308 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:07.308 Zero copy mechanism will not be used. 00:24:07.308 Running I/O for 2 seconds... 
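[Editor's note] The summary table printed after each run reads, left to right: runtime in seconds, IOPS, MiB/s, failed and timed-out I/O per second, then average/min/max latency in microseconds (the Latency(us) label). The first 4 KiB randread pass thus did 17717.53 IOPS at a 7215.47 us average under queue depth 128. If bdevperf's stdout is captured raw, without this harness's timestamp prefixes, the Total row can be scraped with a one-liner such as:

  # illustrative scrape of the summary's Total row, assuming an untimestamped capture
  awk '/Total[[:space:]]*:/ {print "IOPS=" $3, "MiB/s=" $4}' bdevperf.log

Note it matches every Total row, including the all-zero one each bdevperf prints at shutdown.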
00:24:09.201 00:24:09.201 Latency(us) 00:24:09.201 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:09.201 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:24:09.201 nvme0n1 : 2.00 4841.61 605.20 0.00 0.00 3300.28 813.13 11990.66 00:24:09.201 =================================================================================================================== 00:24:09.201 Total : 4841.61 605.20 0.00 0.00 3300.28 813.13 11990.66 00:24:09.201 0 00:24:09.201 01:17:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:09.201 01:17:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:09.201 01:17:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:09.201 01:17:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:09.201 01:17:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:09.201 | select(.opcode=="crc32c") 00:24:09.201 | "\(.module_name) \(.executed)"' 00:24:09.459 01:17:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:09.459 01:17:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:09.459 01:17:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:09.459 01:17:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:09.459 01:17:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 51556 00:24:09.459 01:17:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 51556 ']' 00:24:09.459 01:17:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 51556 00:24:09.459 01:17:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:09.459 01:17:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:09.459 01:17:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 51556 00:24:09.459 01:17:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:09.459 01:17:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:09.459 01:17:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 51556' 00:24:09.459 killing process with pid 51556 00:24:09.459 01:17:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 51556 00:24:09.459 Received shutdown signal, test time was about 2.000000 seconds 00:24:09.459 00:24:09.459 Latency(us) 00:24:09.459 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:09.459 =================================================================================================================== 00:24:09.459 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:09.459 01:17:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 51556 00:24:09.717 01:17:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:24:09.717 01:17:25 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:09.717 01:17:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:09.717 01:17:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:09.717 01:17:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:09.717 01:17:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:09.717 01:17:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:09.717 01:17:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=51965 00:24:09.717 01:17:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:09.717 01:17:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 51965 /var/tmp/bperf.sock 00:24:09.717 01:17:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 51965 ']' 00:24:09.717 01:17:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:09.717 01:17:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:09.717 01:17:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:09.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:09.717 01:17:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:09.717 01:17:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:09.975 [2024-07-16 01:17:25.719173] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 
00:24:09.975 [2024-07-16 01:17:25.719271] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid51965 ] 00:24:09.975 EAL: No free 2048 kB hugepages reported on node 1 00:24:09.975 [2024-07-16 01:17:25.780865] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:09.975 [2024-07-16 01:17:25.888563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:09.975 01:17:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:09.975 01:17:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:09.975 01:17:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:09.975 01:17:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:09.975 01:17:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:10.540 01:17:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:10.540 01:17:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:10.798 nvme0n1 00:24:10.798 01:17:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:10.798 01:17:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:11.081 Running I/O for 2 seconds... 
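[Editor's note] After each timed pass, the harness proves digests were really computed by querying the accel framework's stats over the same bperf socket and checking that the crc32c opcode executed at least once in the expected module - software here, since scan_dsa=false. Condensed from the trace:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # pull "<module_name> <executed>" for the crc32c opcode from the accel stats
  read -r acc_module acc_executed < <(
      $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
      jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
  # the run only counts if crc32c actually executed, and in the module we expected
  (( acc_executed > 0 )) && [[ $acc_module == software ]] && echo "digest path verified"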
00:24:12.983 00:24:12.983 Latency(us) 00:24:12.983 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:12.983 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:12.983 nvme0n1 : 2.00 22312.10 87.16 0.00 0.00 5727.48 2536.49 14369.37 00:24:12.983 =================================================================================================================== 00:24:12.983 Total : 22312.10 87.16 0.00 0.00 5727.48 2536.49 14369.37 00:24:12.983 0 00:24:12.983 01:17:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:12.983 01:17:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:12.983 01:17:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:12.983 01:17:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:12.983 01:17:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:12.983 | select(.opcode=="crc32c") 00:24:12.983 | "\(.module_name) \(.executed)"' 00:24:13.241 01:17:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:13.241 01:17:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:13.241 01:17:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:13.241 01:17:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:13.241 01:17:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 51965 00:24:13.241 01:17:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 51965 ']' 00:24:13.241 01:17:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 51965 00:24:13.241 01:17:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:13.241 01:17:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:13.241 01:17:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 51965 00:24:13.241 01:17:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:13.241 01:17:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:13.241 01:17:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 51965' 00:24:13.241 killing process with pid 51965 00:24:13.241 01:17:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 51965 00:24:13.241 Received shutdown signal, test time was about 2.000000 seconds 00:24:13.241 00:24:13.241 Latency(us) 00:24:13.241 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:13.241 =================================================================================================================== 00:24:13.241 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:13.241 01:17:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 51965 00:24:13.499 01:17:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:24:13.499 01:17:29 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:13.499 01:17:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:13.500 01:17:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:13.500 01:17:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:13.500 01:17:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:13.500 01:17:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:13.500 01:17:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=52371 00:24:13.500 01:17:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:13.500 01:17:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 52371 /var/tmp/bperf.sock 00:24:13.500 01:17:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 52371 ']' 00:24:13.500 01:17:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:13.500 01:17:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:13.500 01:17:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:13.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:13.500 01:17:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:13.500 01:17:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:13.500 [2024-07-16 01:17:29.439559] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:24:13.500 [2024-07-16 01:17:29.439649] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid52371 ] 00:24:13.500 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:13.500 Zero copy mechanism will not be used. 
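[Editor's note] Teardown between runs (visible on both sides of this pass) follows one pattern: confirm the pid is alive, refuse to signal sudo, kill, then reap so the RPC socket and listener are free for the next bdevperf. A condensed sketch of what the trace shows, not the verbatim autotest helper:

  killprocess() {
      local pid=$1
      [[ -n $pid ]] || return 1
      kill -0 "$pid" || return 1                                    # still running?
      [[ $(ps --no-headers -o comm= "$pid") != sudo ]] || return 1  # never signal sudo
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"   # reap the child so its socket is released before the next run
  }

The comm values seen here (reactor_1 for bdevperf, reactor_0 for nvmf_tgt) are the SPDK reactor thread names the processes adopt.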
00:24:13.500 EAL: No free 2048 kB hugepages reported on node 1 00:24:13.758 [2024-07-16 01:17:29.499127] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:13.758 [2024-07-16 01:17:29.606976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:13.758 01:17:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:13.758 01:17:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:13.758 01:17:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:13.758 01:17:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:13.758 01:17:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:14.016 01:17:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:14.016 01:17:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:14.582 nvme0n1 00:24:14.582 01:17:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:14.582 01:17:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:14.582 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:14.582 Zero copy mechanism will not be used. 00:24:14.582 Running I/O for 2 seconds... 
00:24:16.481 00:24:16.481 Latency(us) 00:24:16.481 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:16.481 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:24:16.481 nvme0n1 : 2.00 5687.40 710.92 0.00 0.00 2806.16 2184.53 8349.77 00:24:16.481 =================================================================================================================== 00:24:16.481 Total : 5687.40 710.92 0.00 0.00 2806.16 2184.53 8349.77 00:24:16.481 0 00:24:16.481 01:17:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:16.481 01:17:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:16.481 01:17:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:16.481 01:17:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:16.481 | select(.opcode=="crc32c") 00:24:16.481 | "\(.module_name) \(.executed)"' 00:24:16.481 01:17:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:16.739 01:17:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:16.739 01:17:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:16.739 01:17:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:16.739 01:17:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:16.739 01:17:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 52371 00:24:16.739 01:17:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 52371 ']' 00:24:16.739 01:17:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 52371 00:24:16.739 01:17:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:16.739 01:17:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:16.739 01:17:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 52371 00:24:16.739 01:17:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:16.739 01:17:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:16.739 01:17:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 52371' 00:24:16.739 killing process with pid 52371 00:24:16.739 01:17:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 52371 00:24:16.739 Received shutdown signal, test time was about 2.000000 seconds 00:24:16.739 00:24:16.739 Latency(us) 00:24:16.739 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:16.739 =================================================================================================================== 00:24:16.739 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:16.739 01:17:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 52371 00:24:16.997 01:17:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 51006 00:24:16.997 01:17:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@948 -- # '[' -z 51006 ']' 00:24:16.997 01:17:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 51006 00:24:16.997 01:17:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:16.997 01:17:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:16.997 01:17:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 51006 00:24:16.997 01:17:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:16.997 01:17:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:16.997 01:17:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 51006' 00:24:16.997 killing process with pid 51006 00:24:16.997 01:17:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 51006 00:24:16.997 01:17:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 51006 00:24:17.255 00:24:17.255 real 0m15.367s 00:24:17.255 user 0m29.580s 00:24:17.255 sys 0m4.575s 00:24:17.255 01:17:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:17.255 01:17:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:17.255 ************************************ 00:24:17.255 END TEST nvmf_digest_clean 00:24:17.255 ************************************ 00:24:17.255 01:17:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:24:17.255 01:17:33 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:24:17.255 01:17:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:17.255 01:17:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:17.255 01:17:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:17.513 ************************************ 00:24:17.513 START TEST nvmf_digest_error 00:24:17.513 ************************************ 00:24:17.513 01:17:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:24:17.513 01:17:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:24:17.513 01:17:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:17.513 01:17:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:17.513 01:17:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:17.513 01:17:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=52922 00:24:17.513 01:17:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:17.513 01:17:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 52922 00:24:17.513 01:17:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 52922 ']' 00:24:17.513 01:17:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:17.513 01:17:33 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:17.513 01:17:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:17.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:17.513 01:17:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:17.513 01:17:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:17.513 [2024-07-16 01:17:33.310798] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:24:17.513 [2024-07-16 01:17:33.310881] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:17.513 EAL: No free 2048 kB hugepages reported on node 1 00:24:17.513 [2024-07-16 01:17:33.373107] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:17.513 [2024-07-16 01:17:33.477527] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:17.513 [2024-07-16 01:17:33.477594] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:17.513 [2024-07-16 01:17:33.477607] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:17.513 [2024-07-16 01:17:33.477617] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:17.513 [2024-07-16 01:17:33.477626] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
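[Editor's note] What follows is the error-path variant of the same test: the target's crc32c opcode is routed through the accel error-injection module and armed to corrupt crc32c computations (-i 256 evidently arms 256 injections), while the initiator connects with data digest on and unlimited bdev retries. Each corrupted digest then surfaces on the initiator as an nvme_tcp.c "data digest error" plus a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, which the bdev layer retries. The RPC sequence, condensed from the trace below (rpc.py with no -s talks to the target's default /var/tmp/spdk.sock):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # target side: send crc32c through the error-injecting accel module
  $SPDK/scripts/rpc.py accel_assign_opc -o crc32c -m error
  # initiator side: keep NVMe error stats and retry failed I/O forever
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options \
      --nvme-error-stat --bdev-retry-count -1
  # connect clean first, then arm the corrupted crc32c computations
  $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256

Note this bdevperf is launched without --wait-for-rpc: with crc32c handled in software on the initiator, there is no accel configuration to stage before init.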
00:24:17.513 [2024-07-16 01:17:33.477658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:17.771 01:17:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:17.771 01:17:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:24:17.771 01:17:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:17.771 01:17:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:17.771 01:17:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:17.771 01:17:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:17.771 01:17:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:24:17.771 01:17:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.771 01:17:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:17.771 [2024-07-16 01:17:33.562297] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:24:17.771 01:17:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.771 01:17:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:24:17.771 01:17:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:24:17.771 01:17:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.771 01:17:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:17.771 null0 00:24:17.771 [2024-07-16 01:17:33.674585] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:17.771 [2024-07-16 01:17:33.698787] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:17.771 01:17:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.771 01:17:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:24:17.771 01:17:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:17.771 01:17:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:24:17.771 01:17:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:24:17.771 01:17:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:24:17.771 01:17:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=52955 00:24:17.771 01:17:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:24:17.771 01:17:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 52955 /var/tmp/bperf.sock 00:24:17.771 01:17:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 52955 ']' 00:24:17.771 01:17:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:17.771 01:17:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:24:17.771 01:17:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:17.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:17.771 01:17:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:17.771 01:17:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:17.771 [2024-07-16 01:17:33.742764] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:24:17.771 [2024-07-16 01:17:33.742823] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid52955 ] 00:24:18.029 EAL: No free 2048 kB hugepages reported on node 1 00:24:18.029 [2024-07-16 01:17:33.799982] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:18.029 [2024-07-16 01:17:33.904875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:18.029 01:17:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:18.029 01:17:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:24:18.029 01:17:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:18.029 01:17:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:18.594 01:17:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:18.594 01:17:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.594 01:17:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:18.594 01:17:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.594 01:17:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:18.594 01:17:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:18.852 nvme0n1 00:24:18.852 01:17:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:24:18.852 01:17:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.852 01:17:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:18.852 01:17:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.852 01:17:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:18.852 01:17:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:19.110 Running I/O for 2 seconds... 00:24:19.110 [2024-07-16 01:17:34.896886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd5f00) 00:24:19.110 [2024-07-16 01:17:34.896940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.110 [2024-07-16 01:17:34.896965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.110 [2024-07-16 01:17:34.912242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd5f00) 00:24:19.110 [2024-07-16 01:17:34.912275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:2991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.110 [2024-07-16 01:17:34.912316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.110 [2024-07-16 01:17:34.925673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd5f00) 00:24:19.110 [2024-07-16 01:17:34.925703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.110 [2024-07-16 01:17:34.925736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.110 [2024-07-16 01:17:34.939229] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd5f00) 00:24:19.110 [2024-07-16 01:17:34.939259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:9264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.110 [2024-07-16 01:17:34.939293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.110 [2024-07-16 01:17:34.953661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd5f00) 00:24:19.110 [2024-07-16 01:17:34.953718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:20489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.110 [2024-07-16 01:17:34.953741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.110 [2024-07-16 01:17:34.969350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd5f00) 00:24:19.110 [2024-07-16 01:17:34.969380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.110 [2024-07-16 01:17:34.969426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.110 [2024-07-16 01:17:34.981516] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd5f00) 00:24:19.110 [2024-07-16 01:17:34.981545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:2517 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0
00:24:19.110 [2024-07-16 01:17:34.981568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:19.110 [2024-07-16 01:17:34.995631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd5f00)
00:24:19.110 [2024-07-16 01:17:34.995674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:25531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:19.110 [2024-07-16 01:17:34.995694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... remaining entries trimmed: the same three-line sequence -- nvme_tcp.c:1459 data digest error on tqpair=(0xbd5f00), nvme_qpair.c:243 READ command print, nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion -- repeats with varying cid and lba values from 01:17:35.013 through 01:17:36.879; bdev_get_iostat below reports 138 such transient errors for nvme0n1 over the whole run ...]
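Each failed read in the trimmed block produces exactly one "COMMAND TRANSIENT TRANSPORT ERROR (00/22)" completion line, so a saved copy of this console output can be cross-checked against the counter the test reads back further down. A minimal sketch (the log file name is hypothetical, not part of the test):

  # Tally the failed completions in a captured copy of this log
  grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' nvmf_digest_error.log

The result should roughly line up with the command_transient_transport_error value that bdev_get_iostat reports (138 in this run).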
[2024-07-16 01:17:36.879008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.920 00:24:20.920 Latency(us) 00:24:20.920 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:20.920 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:20.920 nvme0n1 : 2.00 17612.11 68.80 0.00 0.00 7257.41 3422.44 17767.54 00:24:20.920 =================================================================================================================== 00:24:20.920 Total : 17612.11 68.80 0.00 0.00 7257.41 3422.44 17767.54 00:24:20.920 0 00:24:20.920 01:17:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:20.920 01:17:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:20.920 01:17:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:20.920 | .driver_specific 00:24:20.920 | .nvme_error 00:24:20.920 | .status_code 00:24:20.920 | .command_transient_transport_error' 00:24:20.920 01:17:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:21.179 01:17:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 138 > 0 )) 00:24:21.179 01:17:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 52955 00:24:21.179 01:17:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 52955 ']' 00:24:21.179 01:17:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 52955 00:24:21.179 01:17:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:24:21.179 01:17:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:21.179 01:17:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 52955 00:24:21.437 01:17:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:21.437 01:17:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:21.437 01:17:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 52955' 00:24:21.437 killing process with pid 52955 00:24:21.438 01:17:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 52955 00:24:21.438 Received shutdown signal, test time was about 2.000000 seconds 00:24:21.438 00:24:21.438 Latency(us) 00:24:21.438 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:21.438 =================================================================================================================== 00:24:21.438 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:21.438 01:17:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 52955 00:24:21.720 01:17:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:24:21.720 01:17:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:21.721 01:17:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:24:21.721 01:17:37 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:24:21.721 01:17:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:24:21.721 01:17:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=53359 00:24:21.721 01:17:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:24:21.721 01:17:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 53359 /var/tmp/bperf.sock 00:24:21.721 01:17:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 53359 ']' 00:24:21.721 01:17:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:21.721 01:17:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:21.721 01:17:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:21.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:21.721 01:17:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:21.721 01:17:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:21.721 [2024-07-16 01:17:37.491182] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:24:21.721 [2024-07-16 01:17:37.491271] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid53359 ] 00:24:21.721 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:21.721 Zero copy mechanism will not be used. 
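
[editor's note] A note on the driving pattern visible in this trace: bdevperf is launched with -z, so it only initializes (EAL, one reactor on core 1 for core mask 0x2) and then parks; the bdev and workload are configured afterwards over the UNIX-domain RPC socket passed with -r, and the 2-second run only begins when perform_tests is sent. A minimal standalone sketch of the same pattern follows; the SPDK checkout path and the polling loop are illustrative, not taken from this log:

#!/usr/bin/env bash
SPDK=./spdk                                   # assumed checkout location
SOCK=/var/tmp/bperf.sock

# -z: initialize and wait; the workload only starts on perform_tests.
# Flags mirror this run: core mask 0x2, 128 KiB randread, qd 16, 2 s.
"$SPDK/build/examples/bdevperf" -m 2 -r "$SOCK" -w randread -o 131072 -t 2 -q 16 -z &
BPERF=$!

# Poll until the RPC server answers (the trace's waitforlisten does the
# same with a timeout); rpc_get_methods is a cheap no-op query.
until "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
  sleep 0.1
done

# ... attach bdevs / arm error injection here (see the next sketch) ...

# Start the timed run, then tear the process down.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests
kill "$BPERF"
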
00:24:21.721 EAL: No free 2048 kB hugepages reported on node 1 00:24:21.721 [2024-07-16 01:17:37.548499] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:21.721 [2024-07-16 01:17:37.651048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:21.979 01:17:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:21.979 01:17:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:24:21.979 01:17:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:21.979 01:17:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:22.236 01:17:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:22.236 01:17:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.236 01:17:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:22.236 01:17:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.236 01:17:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:22.236 01:17:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:22.493 nvme0n1 00:24:22.493 01:17:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:24:22.493 01:17:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.493 01:17:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:22.493 01:17:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.493 01:17:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:22.493 01:17:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:22.752 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:22.752 Zero copy mechanism will not be used. 00:24:22.752 Running I/O for 2 seconds... 
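
[editor's note] The RPC sequence just traced is the whole setup for this digest_error case: data digest verification is enabled on the TCP controller (--ddgst), the accel crc32c engine is told (via rpc_cmd, i.e. the app behind the default RPC socket) to corrupt every 32nd checksum, and because --bdev-retry-count -1 retries transient errors forever, the run still completes; the test instead asserts that the --nvme-error-stat counter for transient transport errors moved, exactly as the (( 138 > 0 )) check did for the previous run. Condensed into a standalone sketch (rpc.py path assumed as above; every RPC and the jq filter are copied from the trace):

RPC=./spdk/scripts/rpc.py                     # assumed path
BSOCK=/var/tmp/bperf.sock

# bdevperf side: record every NVMe status code per controller and retry
# transient failures forever, so corrupted reads never fail the workload.
"$RPC" -s "$BSOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Default-socket side (rpc_cmd in the trace): keep crc32c injection off
# while connecting, so fabric and admin traffic is left intact.
"$RPC" accel_error_inject_error -o crc32c -t disable

# Attach the TCP controller with data digest enabled.
"$RPC" -s "$BSOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
  -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt every 32nd crc32c result: received data digests now fail
# intermittently, producing the triplets logged below.
"$RPC" accel_error_inject_error -o crc32c -t corrupt -i 32

# ... bdevperf.py perform_tests drives 2 seconds of I/O here ...

# Pass criterion (host/digest.sh@71): the transient-error counter is nonzero.
errs=$("$RPC" -s "$BSOCK" bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code
           | .command_transient_transport_error')
(( errs > 0 )) || echo "no transient transport errors recorded" >&2
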
00:24:22.752 [2024-07-16 01:17:38.579705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:22.752
[2024-07-16 01:17:38.579770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.752
[2024-07-16 01:17:38.579791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.752
[... the triplet repeats for the rest of the 2-second run: data digest errors on tqpair=(0x10496b0) every few ms from 01:17:38.585319 through 01:17:39.281132, all READ commands of len:32 with rotating cid/lba and sqhd stepping 0001/0021/0041/0061 ...]
00:24:23.536 [2024-07-16 01:17:39.286830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.536 [2024-07-16 01:17:39.286862]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.536 [2024-07-16 01:17:39.286879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.536 [2024-07-16 01:17:39.292728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.536 [2024-07-16 01:17:39.292759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.536 [2024-07-16 01:17:39.292790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.536 [2024-07-16 01:17:39.298011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.536 [2024-07-16 01:17:39.298044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.536 [2024-07-16 01:17:39.298066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.536 [2024-07-16 01:17:39.301487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.536 [2024-07-16 01:17:39.301517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.536 [2024-07-16 01:17:39.301534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.536 [2024-07-16 01:17:39.306740] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.536 [2024-07-16 01:17:39.306771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.536 [2024-07-16 01:17:39.306787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.536 [2024-07-16 01:17:39.312536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.536 [2024-07-16 01:17:39.312566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.536 [2024-07-16 01:17:39.312583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.536 [2024-07-16 01:17:39.318308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.536 [2024-07-16 01:17:39.318339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.536 [2024-07-16 01:17:39.318356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.536 [2024-07-16 01:17:39.324221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 
00:24:23.536 [2024-07-16 01:17:39.324270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.536 [2024-07-16 01:17:39.324288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.536 [2024-07-16 01:17:39.330044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.536 [2024-07-16 01:17:39.330076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.536 [2024-07-16 01:17:39.330102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.536 [2024-07-16 01:17:39.335882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.536 [2024-07-16 01:17:39.335912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.536 [2024-07-16 01:17:39.335929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.536 [2024-07-16 01:17:39.341740] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.536 [2024-07-16 01:17:39.341769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.536 [2024-07-16 01:17:39.341800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.536 [2024-07-16 01:17:39.347554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.536 [2024-07-16 01:17:39.347584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.536 [2024-07-16 01:17:39.347601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.536 [2024-07-16 01:17:39.353250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.536 [2024-07-16 01:17:39.353280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.536 [2024-07-16 01:17:39.353297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.536 [2024-07-16 01:17:39.359066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.536 [2024-07-16 01:17:39.359097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.536 [2024-07-16 01:17:39.359114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.536 [2024-07-16 01:17:39.364803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x10496b0) 00:24:23.536 [2024-07-16 01:17:39.364848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.536 [2024-07-16 01:17:39.364865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.536 [2024-07-16 01:17:39.370623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.536 [2024-07-16 01:17:39.370653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.536 [2024-07-16 01:17:39.370669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.536 [2024-07-16 01:17:39.376482] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.536 [2024-07-16 01:17:39.376511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.536 [2024-07-16 01:17:39.376528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.536 [2024-07-16 01:17:39.382212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.536 [2024-07-16 01:17:39.382243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.536 [2024-07-16 01:17:39.382275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.536 [2024-07-16 01:17:39.387971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.536 [2024-07-16 01:17:39.388001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.537 [2024-07-16 01:17:39.388033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.537 [2024-07-16 01:17:39.393784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.537 [2024-07-16 01:17:39.393814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.537 [2024-07-16 01:17:39.393831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.537 [2024-07-16 01:17:39.399703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.537 [2024-07-16 01:17:39.399732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.537 [2024-07-16 01:17:39.399764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.537 [2024-07-16 01:17:39.405531] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.537 [2024-07-16 01:17:39.405560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.537 [2024-07-16 01:17:39.405575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.537 [2024-07-16 01:17:39.411435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.537 [2024-07-16 01:17:39.411464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.537 [2024-07-16 01:17:39.411495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.537 [2024-07-16 01:17:39.417218] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.537 [2024-07-16 01:17:39.417263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.537 [2024-07-16 01:17:39.417279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.537 [2024-07-16 01:17:39.422980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.537 [2024-07-16 01:17:39.423011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.537 [2024-07-16 01:17:39.423043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.537 [2024-07-16 01:17:39.428744] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.537 [2024-07-16 01:17:39.428773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.537 [2024-07-16 01:17:39.428795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.537 [2024-07-16 01:17:39.434476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.537 [2024-07-16 01:17:39.434506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.537 [2024-07-16 01:17:39.434522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.537 [2024-07-16 01:17:39.440201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.537 [2024-07-16 01:17:39.440232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.537 [2024-07-16 01:17:39.440250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:24:23.537 [2024-07-16 01:17:39.446036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.537 [2024-07-16 01:17:39.446083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.537 [2024-07-16 01:17:39.446100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.537 [2024-07-16 01:17:39.451838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.537 [2024-07-16 01:17:39.451868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.537 [2024-07-16 01:17:39.451885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.537 [2024-07-16 01:17:39.457647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.537 [2024-07-16 01:17:39.457693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.537 [2024-07-16 01:17:39.457710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.537 [2024-07-16 01:17:39.463417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.537 [2024-07-16 01:17:39.463446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.537 [2024-07-16 01:17:39.463462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.537 [2024-07-16 01:17:39.469255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.537 [2024-07-16 01:17:39.469301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.537 [2024-07-16 01:17:39.469318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.537 [2024-07-16 01:17:39.474814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.537 [2024-07-16 01:17:39.474845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.537 [2024-07-16 01:17:39.474862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.537 [2024-07-16 01:17:39.480342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.537 [2024-07-16 01:17:39.480378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.537 [2024-07-16 01:17:39.480395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.537 [2024-07-16 01:17:39.486338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.537 [2024-07-16 01:17:39.486383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.537 [2024-07-16 01:17:39.486400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.537 [2024-07-16 01:17:39.492093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.537 [2024-07-16 01:17:39.492124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.537 [2024-07-16 01:17:39.492142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.537 [2024-07-16 01:17:39.497616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.537 [2024-07-16 01:17:39.497646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.537 [2024-07-16 01:17:39.497662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.537 [2024-07-16 01:17:39.503278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.537 [2024-07-16 01:17:39.503323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.537 [2024-07-16 01:17:39.503340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.537 [2024-07-16 01:17:39.508869] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.537 [2024-07-16 01:17:39.508913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.537 [2024-07-16 01:17:39.508930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.537 [2024-07-16 01:17:39.514492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.537 [2024-07-16 01:17:39.514522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.537 [2024-07-16 01:17:39.514538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.537 [2024-07-16 01:17:39.520330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.537 [2024-07-16 01:17:39.520374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.538 [2024-07-16 01:17:39.520390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.538 [2024-07-16 01:17:39.525996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.538 [2024-07-16 01:17:39.526028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.538 [2024-07-16 01:17:39.526061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.796 [2024-07-16 01:17:39.531666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.796 [2024-07-16 01:17:39.531712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.796 [2024-07-16 01:17:39.531728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.796 [2024-07-16 01:17:39.537356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.796 [2024-07-16 01:17:39.537387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.796 [2024-07-16 01:17:39.537403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.796 [2024-07-16 01:17:39.543143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.796 [2024-07-16 01:17:39.543175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.796 [2024-07-16 01:17:39.543192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.796 [2024-07-16 01:17:39.548911] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.796 [2024-07-16 01:17:39.548961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.796 [2024-07-16 01:17:39.548980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.796 [2024-07-16 01:17:39.554613] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.796 [2024-07-16 01:17:39.554643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.796 [2024-07-16 01:17:39.554659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.796 [2024-07-16 01:17:39.560226] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.796 [2024-07-16 01:17:39.560272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.796 [2024-07-16 01:17:39.560289] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.796 [2024-07-16 01:17:39.565887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.796 [2024-07-16 01:17:39.565916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.796 [2024-07-16 01:17:39.565947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.796 [2024-07-16 01:17:39.571591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.796 [2024-07-16 01:17:39.571622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.796 [2024-07-16 01:17:39.571639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.796 [2024-07-16 01:17:39.577664] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.796 [2024-07-16 01:17:39.577709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.796 [2024-07-16 01:17:39.577731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.796 [2024-07-16 01:17:39.583497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.796 [2024-07-16 01:17:39.583525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.796 [2024-07-16 01:17:39.583541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.796 [2024-07-16 01:17:39.589386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.796 [2024-07-16 01:17:39.589417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.796 [2024-07-16 01:17:39.589434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.797 [2024-07-16 01:17:39.594887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.797 [2024-07-16 01:17:39.594931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.797 [2024-07-16 01:17:39.594947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.797 [2024-07-16 01:17:39.600502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.797 [2024-07-16 01:17:39.600548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.797 
[2024-07-16 01:17:39.600565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.797 [2024-07-16 01:17:39.606198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.797 [2024-07-16 01:17:39.606228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.797 [2024-07-16 01:17:39.606260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.797 [2024-07-16 01:17:39.611849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.797 [2024-07-16 01:17:39.611878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.797 [2024-07-16 01:17:39.611895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.797 [2024-07-16 01:17:39.617522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.797 [2024-07-16 01:17:39.617552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.797 [2024-07-16 01:17:39.617584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.797 [2024-07-16 01:17:39.623116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.797 [2024-07-16 01:17:39.623148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.797 [2024-07-16 01:17:39.623166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.797 [2024-07-16 01:17:39.628807] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.797 [2024-07-16 01:17:39.628858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.797 [2024-07-16 01:17:39.628875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.797 [2024-07-16 01:17:39.634566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.797 [2024-07-16 01:17:39.634595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.797 [2024-07-16 01:17:39.634625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.797 [2024-07-16 01:17:39.640129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.797 [2024-07-16 01:17:39.640161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24256 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.797 [2024-07-16 01:17:39.640178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.797 [2024-07-16 01:17:39.645791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.797 [2024-07-16 01:17:39.645821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.797 [2024-07-16 01:17:39.645852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.797 [2024-07-16 01:17:39.651583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.797 [2024-07-16 01:17:39.651628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.797 [2024-07-16 01:17:39.651644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.797 [2024-07-16 01:17:39.657231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.797 [2024-07-16 01:17:39.657276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.797 [2024-07-16 01:17:39.657292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.797 [2024-07-16 01:17:39.662986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.797 [2024-07-16 01:17:39.663016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.797 [2024-07-16 01:17:39.663034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.797 [2024-07-16 01:17:39.668787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.797 [2024-07-16 01:17:39.668834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.797 [2024-07-16 01:17:39.668851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.797 [2024-07-16 01:17:39.674498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.797 [2024-07-16 01:17:39.674541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.797 [2024-07-16 01:17:39.674557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.797 [2024-07-16 01:17:39.680383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.797 [2024-07-16 01:17:39.680427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:7 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.797 [2024-07-16 01:17:39.680443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.797 [2024-07-16 01:17:39.686114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.797 [2024-07-16 01:17:39.686159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.797 [2024-07-16 01:17:39.686175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.797 [2024-07-16 01:17:39.691845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.797 [2024-07-16 01:17:39.691876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.797 [2024-07-16 01:17:39.691892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.797 [2024-07-16 01:17:39.697435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.797 [2024-07-16 01:17:39.697484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.797 [2024-07-16 01:17:39.697501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.797 [2024-07-16 01:17:39.703030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.797 [2024-07-16 01:17:39.703061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.797 [2024-07-16 01:17:39.703079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.797 [2024-07-16 01:17:39.708634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.797 [2024-07-16 01:17:39.708663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.797 [2024-07-16 01:17:39.708679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.797 [2024-07-16 01:17:39.714344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.797 [2024-07-16 01:17:39.714373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.797 [2024-07-16 01:17:39.714389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.797 [2024-07-16 01:17:39.720076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.797 [2024-07-16 01:17:39.720107] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.797 [2024-07-16 01:17:39.720124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.797 [2024-07-16 01:17:39.725879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.797 [2024-07-16 01:17:39.725925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.797 [2024-07-16 01:17:39.725948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.797 [2024-07-16 01:17:39.731645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.797 [2024-07-16 01:17:39.731690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.797 [2024-07-16 01:17:39.731706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.797 [2024-07-16 01:17:39.737320] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.797 [2024-07-16 01:17:39.737364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.797 [2024-07-16 01:17:39.737380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.797 [2024-07-16 01:17:39.743124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.797 [2024-07-16 01:17:39.743155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.797 [2024-07-16 01:17:39.743187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.797 [2024-07-16 01:17:39.748927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.797 [2024-07-16 01:17:39.748983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.797 [2024-07-16 01:17:39.749001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.797 [2024-07-16 01:17:39.754675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.797 [2024-07-16 01:17:39.754720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.797 [2024-07-16 01:17:39.754736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.797 [2024-07-16 01:17:39.760293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.798 
[2024-07-16 01:17:39.760324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.798 [2024-07-16 01:17:39.760340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.798 [2024-07-16 01:17:39.765883] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.798 [2024-07-16 01:17:39.765914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.798 [2024-07-16 01:17:39.765930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.798 [2024-07-16 01:17:39.771544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.798 [2024-07-16 01:17:39.771589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.798 [2024-07-16 01:17:39.771605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.798 [2024-07-16 01:17:39.777273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.798 [2024-07-16 01:17:39.777304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.798 [2024-07-16 01:17:39.777320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.798 [2024-07-16 01:17:39.782801] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.798 [2024-07-16 01:17:39.782831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.798 [2024-07-16 01:17:39.782863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.798 [2024-07-16 01:17:39.788554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:23.798 [2024-07-16 01:17:39.788585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.798 [2024-07-16 01:17:39.788615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.056 [2024-07-16 01:17:39.794279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.056 [2024-07-16 01:17:39.794322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.056 [2024-07-16 01:17:39.794339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.056 [2024-07-16 01:17:39.799885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x10496b0) 00:24:24.056 [2024-07-16 01:17:39.799917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.056 [2024-07-16 01:17:39.799949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.056 [2024-07-16 01:17:39.805707] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.056 [2024-07-16 01:17:39.805737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.056 [2024-07-16 01:17:39.805753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.056 [2024-07-16 01:17:39.811200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.056 [2024-07-16 01:17:39.811231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.056 [2024-07-16 01:17:39.811248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.056 [2024-07-16 01:17:39.816862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.056 [2024-07-16 01:17:39.816907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.056 [2024-07-16 01:17:39.816924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.056 [2024-07-16 01:17:39.822501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.056 [2024-07-16 01:17:39.822530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.056 [2024-07-16 01:17:39.822552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.056 [2024-07-16 01:17:39.828173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.056 [2024-07-16 01:17:39.828205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.056 [2024-07-16 01:17:39.828222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.056 [2024-07-16 01:17:39.834048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.056 [2024-07-16 01:17:39.834080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.056 [2024-07-16 01:17:39.834098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.056 [2024-07-16 01:17:39.839587] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.056 [2024-07-16 01:17:39.839618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.056 [2024-07-16 01:17:39.839635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.056 [2024-07-16 01:17:39.844561] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.056 [2024-07-16 01:17:39.844592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.056 [2024-07-16 01:17:39.844609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.056 [2024-07-16 01:17:39.847953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.056 [2024-07-16 01:17:39.847990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.056 [2024-07-16 01:17:39.848007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.056 [2024-07-16 01:17:39.853624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.056 [2024-07-16 01:17:39.853653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.056 [2024-07-16 01:17:39.853669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.056 [2024-07-16 01:17:39.859208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.056 [2024-07-16 01:17:39.859254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.056 [2024-07-16 01:17:39.859271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.056 [2024-07-16 01:17:39.865010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.056 [2024-07-16 01:17:39.865040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.056 [2024-07-16 01:17:39.865057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.056 [2024-07-16 01:17:39.870596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.056 [2024-07-16 01:17:39.870630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.056 [2024-07-16 01:17:39.870648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:24:24.056 [2024-07-16 01:17:39.876246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.056 [2024-07-16 01:17:39.876290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.056 [2024-07-16 01:17:39.876307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.056 [2024-07-16 01:17:39.881850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.056 [2024-07-16 01:17:39.881880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.056 [2024-07-16 01:17:39.881898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.056 [2024-07-16 01:17:39.887547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.056 [2024-07-16 01:17:39.887576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.056 [2024-07-16 01:17:39.887592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.056 [2024-07-16 01:17:39.893208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.056 [2024-07-16 01:17:39.893238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.056 [2024-07-16 01:17:39.893269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.056 [2024-07-16 01:17:39.898990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.056 [2024-07-16 01:17:39.899021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.056 [2024-07-16 01:17:39.899039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.056 [2024-07-16 01:17:39.904729] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.056 [2024-07-16 01:17:39.904760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.056 [2024-07-16 01:17:39.904777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.056 [2024-07-16 01:17:39.910695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.056 [2024-07-16 01:17:39.910725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.056 [2024-07-16 01:17:39.910741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.056 [2024-07-16 01:17:39.916267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.056 [2024-07-16 01:17:39.916296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.057 [2024-07-16 01:17:39.916313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.057 [2024-07-16 01:17:39.921827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.057 [2024-07-16 01:17:39.921857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.057 [2024-07-16 01:17:39.921874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.057 [2024-07-16 01:17:39.927451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.057 [2024-07-16 01:17:39.927481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.057 [2024-07-16 01:17:39.927498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.057 [2024-07-16 01:17:39.933118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.057 [2024-07-16 01:17:39.933147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.057 [2024-07-16 01:17:39.933164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.057 [2024-07-16 01:17:39.938897] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.057 [2024-07-16 01:17:39.938927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.057 [2024-07-16 01:17:39.938967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.057 [2024-07-16 01:17:39.944542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.057 [2024-07-16 01:17:39.944572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.057 [2024-07-16 01:17:39.944588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.057 [2024-07-16 01:17:39.950160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.057 [2024-07-16 01:17:39.950190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.057 [2024-07-16 01:17:39.950207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.057 [2024-07-16 01:17:39.955799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.057 [2024-07-16 01:17:39.955828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.057 [2024-07-16 01:17:39.955844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.057 [2024-07-16 01:17:39.961320] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.057 [2024-07-16 01:17:39.961350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.057 [2024-07-16 01:17:39.961366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.057 [2024-07-16 01:17:39.966877] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.057 [2024-07-16 01:17:39.966906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.057 [2024-07-16 01:17:39.966928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.057 [2024-07-16 01:17:39.972407] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.057 [2024-07-16 01:17:39.972438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.057 [2024-07-16 01:17:39.972455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.057 [2024-07-16 01:17:39.978039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.057 [2024-07-16 01:17:39.978069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.057 [2024-07-16 01:17:39.978087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.057 [2024-07-16 01:17:39.983675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.057 [2024-07-16 01:17:39.983705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.057 [2024-07-16 01:17:39.983721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.057 [2024-07-16 01:17:39.989329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.057 [2024-07-16 01:17:39.989358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.057 [2024-07-16 01:17:39.989374] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.057 [2024-07-16 01:17:39.994896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.057 [2024-07-16 01:17:39.994925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.057 [2024-07-16 01:17:39.994965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.057 [2024-07-16 01:17:40.000776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.057 [2024-07-16 01:17:40.000809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.057 [2024-07-16 01:17:40.000827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.057 [2024-07-16 01:17:40.006454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.057 [2024-07-16 01:17:40.006496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.057 [2024-07-16 01:17:40.006515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.057 [2024-07-16 01:17:40.011342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.057 [2024-07-16 01:17:40.011376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.057 [2024-07-16 01:17:40.011393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.057 [2024-07-16 01:17:40.016851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.057 [2024-07-16 01:17:40.016899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.057 [2024-07-16 01:17:40.016919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.057 [2024-07-16 01:17:40.023425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.057 [2024-07-16 01:17:40.023464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.057 [2024-07-16 01:17:40.023482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.057 [2024-07-16 01:17:40.029122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.057 [2024-07-16 01:17:40.029155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:24.057 [2024-07-16 01:17:40.029174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.057 [2024-07-16 01:17:40.034742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.057 [2024-07-16 01:17:40.034771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.057 [2024-07-16 01:17:40.034787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.057 [2024-07-16 01:17:40.040509] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.057 [2024-07-16 01:17:40.040539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.057 [2024-07-16 01:17:40.040556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.057 [2024-07-16 01:17:40.047142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.057 [2024-07-16 01:17:40.047177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.057 [2024-07-16 01:17:40.047196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.315 [2024-07-16 01:17:40.052823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.315 [2024-07-16 01:17:40.052855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.315 [2024-07-16 01:17:40.052873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.315 [2024-07-16 01:17:40.058660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.315 [2024-07-16 01:17:40.058691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.315 [2024-07-16 01:17:40.058708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.315 [2024-07-16 01:17:40.064162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.315 [2024-07-16 01:17:40.064194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.315 [2024-07-16 01:17:40.064211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.315 [2024-07-16 01:17:40.069739] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.315 [2024-07-16 01:17:40.069770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14208 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.315 [2024-07-16 01:17:40.069786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.315 [2024-07-16 01:17:40.075862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.315 [2024-07-16 01:17:40.075894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.315 [2024-07-16 01:17:40.075911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.315 [2024-07-16 01:17:40.081270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.315 [2024-07-16 01:17:40.081318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.315 [2024-07-16 01:17:40.081342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.315 [2024-07-16 01:17:40.086816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.315 [2024-07-16 01:17:40.086847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.315 [2024-07-16 01:17:40.086867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.315 [2024-07-16 01:17:40.092782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.315 [2024-07-16 01:17:40.092828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.315 [2024-07-16 01:17:40.092848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.315 [2024-07-16 01:17:40.098350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.315 [2024-07-16 01:17:40.098380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.315 [2024-07-16 01:17:40.098412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.315 [2024-07-16 01:17:40.104288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.315 [2024-07-16 01:17:40.104334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.316 [2024-07-16 01:17:40.104351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.316 [2024-07-16 01:17:40.109896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.316 [2024-07-16 01:17:40.109931] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.316 [2024-07-16 01:17:40.109973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.316 [2024-07-16 01:17:40.115838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.316 [2024-07-16 01:17:40.115884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.316 [2024-07-16 01:17:40.115907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.316 [2024-07-16 01:17:40.121509] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.316 [2024-07-16 01:17:40.121540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.316 [2024-07-16 01:17:40.121557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.316 [2024-07-16 01:17:40.127328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.316 [2024-07-16 01:17:40.127373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.316 [2024-07-16 01:17:40.127391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.316 [2024-07-16 01:17:40.133081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.316 [2024-07-16 01:17:40.133111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.316 [2024-07-16 01:17:40.133131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.316 [2024-07-16 01:17:40.139188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.316 [2024-07-16 01:17:40.139218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.316 [2024-07-16 01:17:40.139255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.316 [2024-07-16 01:17:40.145268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.316 [2024-07-16 01:17:40.145298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.316 [2024-07-16 01:17:40.145333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.316 [2024-07-16 01:17:40.151072] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.316 
[2024-07-16 01:17:40.151103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.316 [2024-07-16 01:17:40.151121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.316 [2024-07-16 01:17:40.156567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.316 [2024-07-16 01:17:40.156611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.316 [2024-07-16 01:17:40.156628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.316 [2024-07-16 01:17:40.162376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.316 [2024-07-16 01:17:40.162406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.316 [2024-07-16 01:17:40.162423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.316 [2024-07-16 01:17:40.167726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.316 [2024-07-16 01:17:40.167761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.316 [2024-07-16 01:17:40.167777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.316 [2024-07-16 01:17:40.173383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.316 [2024-07-16 01:17:40.173413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.316 [2024-07-16 01:17:40.173428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.316 [2024-07-16 01:17:40.179225] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.316 [2024-07-16 01:17:40.179271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.316 [2024-07-16 01:17:40.179288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.316 [2024-07-16 01:17:40.184799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.316 [2024-07-16 01:17:40.184830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.316 [2024-07-16 01:17:40.184846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.316 [2024-07-16 01:17:40.190727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x10496b0) 00:24:24.316 [2024-07-16 01:17:40.190757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.316 [2024-07-16 01:17:40.190774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.316 [2024-07-16 01:17:40.196217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.316 [2024-07-16 01:17:40.196265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.316 [2024-07-16 01:17:40.196282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.316 [2024-07-16 01:17:40.202147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.316 [2024-07-16 01:17:40.202179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.316 [2024-07-16 01:17:40.202211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.316 [2024-07-16 01:17:40.208059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.316 [2024-07-16 01:17:40.208090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.316 [2024-07-16 01:17:40.208108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.316 [2024-07-16 01:17:40.213670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.316 [2024-07-16 01:17:40.213702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.316 [2024-07-16 01:17:40.213724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.316 [2024-07-16 01:17:40.219331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.316 [2024-07-16 01:17:40.219363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.316 [2024-07-16 01:17:40.219396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.316 [2024-07-16 01:17:40.224981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.316 [2024-07-16 01:17:40.225013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.316 [2024-07-16 01:17:40.225030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.316 [2024-07-16 01:17:40.230443] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.316 [2024-07-16 01:17:40.230488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.316 [2024-07-16 01:17:40.230503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.316 [2024-07-16 01:17:40.236350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.316 [2024-07-16 01:17:40.236379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.316 [2024-07-16 01:17:40.236411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.316 [2024-07-16 01:17:40.242090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.316 [2024-07-16 01:17:40.242121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.316 [2024-07-16 01:17:40.242138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.316 [2024-07-16 01:17:40.247742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.316 [2024-07-16 01:17:40.247772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.316 [2024-07-16 01:17:40.247805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.316 [2024-07-16 01:17:40.253635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.316 [2024-07-16 01:17:40.253667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.316 [2024-07-16 01:17:40.253698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.316 [2024-07-16 01:17:40.259246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.316 [2024-07-16 01:17:40.259276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.316 [2024-07-16 01:17:40.259293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.316 [2024-07-16 01:17:40.264996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.316 [2024-07-16 01:17:40.265046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.316 [2024-07-16 01:17:40.265064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:24:24.316 [2024-07-16 01:17:40.270680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.316 [2024-07-16 01:17:40.270708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.317 [2024-07-16 01:17:40.270743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.317 [2024-07-16 01:17:40.276535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.317 [2024-07-16 01:17:40.276564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.317 [2024-07-16 01:17:40.276595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.317 [2024-07-16 01:17:40.281823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.317 [2024-07-16 01:17:40.281866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.317 [2024-07-16 01:17:40.281882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.317 [2024-07-16 01:17:40.287436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.317 [2024-07-16 01:17:40.287464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.317 [2024-07-16 01:17:40.287499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.317 [2024-07-16 01:17:40.292917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.317 [2024-07-16 01:17:40.292966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.317 [2024-07-16 01:17:40.292984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.317 [2024-07-16 01:17:40.298490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.317 [2024-07-16 01:17:40.298518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.317 [2024-07-16 01:17:40.298550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.317 [2024-07-16 01:17:40.304255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.317 [2024-07-16 01:17:40.304283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.317 [2024-07-16 01:17:40.304299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.575 [2024-07-16 01:17:40.309661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.575 [2024-07-16 01:17:40.309692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.575 [2024-07-16 01:17:40.309709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.575 [2024-07-16 01:17:40.315509] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.575 [2024-07-16 01:17:40.315537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.575 [2024-07-16 01:17:40.315552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.575 [2024-07-16 01:17:40.321128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.575 [2024-07-16 01:17:40.321158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.575 [2024-07-16 01:17:40.321190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.575 [2024-07-16 01:17:40.326885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.575 [2024-07-16 01:17:40.326913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.575 [2024-07-16 01:17:40.326944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.575 [2024-07-16 01:17:40.332364] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.575 [2024-07-16 01:17:40.332406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.575 [2024-07-16 01:17:40.332422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.575 [2024-07-16 01:17:40.337848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.575 [2024-07-16 01:17:40.337877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.575 [2024-07-16 01:17:40.337893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.575 [2024-07-16 01:17:40.343669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.575 [2024-07-16 01:17:40.343697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.575 [2024-07-16 01:17:40.343729] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.575 [2024-07-16 01:17:40.349356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.576 [2024-07-16 01:17:40.349384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.576 [2024-07-16 01:17:40.349399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.576 [2024-07-16 01:17:40.354999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.576 [2024-07-16 01:17:40.355025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.576 [2024-07-16 01:17:40.355041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.576 [2024-07-16 01:17:40.360638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.576 [2024-07-16 01:17:40.360680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.576 [2024-07-16 01:17:40.360701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.576 [2024-07-16 01:17:40.366482] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.576 [2024-07-16 01:17:40.366509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.576 [2024-07-16 01:17:40.366539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.576 [2024-07-16 01:17:40.372156] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.576 [2024-07-16 01:17:40.372186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.576 [2024-07-16 01:17:40.372203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.576 [2024-07-16 01:17:40.377804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.576 [2024-07-16 01:17:40.377833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.576 [2024-07-16 01:17:40.377865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.576 [2024-07-16 01:17:40.383316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.576 [2024-07-16 01:17:40.383344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.576 
[2024-07-16 01:17:40.383373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.576 [2024-07-16 01:17:40.388871] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.576 [2024-07-16 01:17:40.388914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.576 [2024-07-16 01:17:40.388929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.576 [2024-07-16 01:17:40.394549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.576 [2024-07-16 01:17:40.394577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.576 [2024-07-16 01:17:40.394609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.576 [2024-07-16 01:17:40.400141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.576 [2024-07-16 01:17:40.400170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.576 [2024-07-16 01:17:40.400186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.576 [2024-07-16 01:17:40.405682] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.576 [2024-07-16 01:17:40.405724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.576 [2024-07-16 01:17:40.405740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.576 [2024-07-16 01:17:40.411576] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.576 [2024-07-16 01:17:40.411611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.576 [2024-07-16 01:17:40.411629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.576 [2024-07-16 01:17:40.417470] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.576 [2024-07-16 01:17:40.417498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.576 [2024-07-16 01:17:40.417529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.576 [2024-07-16 01:17:40.423098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.576 [2024-07-16 01:17:40.423141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17472 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.576 [2024-07-16 01:17:40.423157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.576 [2024-07-16 01:17:40.428934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.576 [2024-07-16 01:17:40.428973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.576 [2024-07-16 01:17:40.428992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.576 [2024-07-16 01:17:40.434441] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.576 [2024-07-16 01:17:40.434468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.576 [2024-07-16 01:17:40.434497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.576 [2024-07-16 01:17:40.440278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.576 [2024-07-16 01:17:40.440306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.576 [2024-07-16 01:17:40.440322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.576 [2024-07-16 01:17:40.445823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.576 [2024-07-16 01:17:40.445868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.576 [2024-07-16 01:17:40.445884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.576 [2024-07-16 01:17:40.451506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.576 [2024-07-16 01:17:40.451534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.576 [2024-07-16 01:17:40.451566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.576 [2024-07-16 01:17:40.457089] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.576 [2024-07-16 01:17:40.457132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.576 [2024-07-16 01:17:40.457148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.576 [2024-07-16 01:17:40.462680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0) 00:24:24.576 [2024-07-16 01:17:40.462708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:14 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:24.576 [2024-07-16 01:17:40.462723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:24.576 [2024-07-16 01:17:40.468502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0)
00:24:24.576 [2024-07-16 01:17:40.468530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:24.576 [2024-07-16 01:17:40.468562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... the same three-record pattern (nvme_tcp.c data digest error on tqpair=(0x10496b0), READ command print, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for further READ commands from 01:17:40.474 through 01:17:40.565 ...]
00:24:24.835 [2024-07-16 01:17:40.570762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10496b0)
00:24:24.835 [2024-07-16 01:17:40.570790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:24.835 [2024-07-16 01:17:40.570821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:24.835
00:24:24.835 Latency(us)
00:24:24.835 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:24.835 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:24:24.835 nvme0n1 : 2.00 5220.05 652.51 0.00 0.00 3060.12 758.52 12184.84
00:24:24.835 ===================================================================================================================
00:24:24.835 Total : 5220.05 652.51 0.00 0.00 3060.12 758.52 12184.84
00:24:24.835 0
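Every corrupted read above completes with COMMAND TRANSIENT TRANSPORT ERROR (status 00/22), and because the host was configured with bdev_nvme_set_options --nvme-error-stat, those completions are tallied per status code in the bdev's NVMe error statistics. The get_transient_errcount trace that follows reads that tally back over the bperf RPC socket; a minimal standalone sketch of the same query, using the socket, bdev name, and jq path from this run:

  # Query bdevperf (listening on /var/tmp/bperf.sock) for nvme0n1's iostat
  # and pull out the transient transport error counter.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  errcount=$("$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  (( errcount > 0 ))   # the test asserts at least one injected error was counted; here it was 337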
00:24:24.835 01:17:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:24:24.835 01:17:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:24:24.835 01:17:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:24:24.835 01:17:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:24:24.835 | .driver_specific
00:24:24.835 | .nvme_error
00:24:24.835 | .status_code
00:24:24.835 | .command_transient_transport_error'
00:24:25.092 01:17:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 337 > 0 ))
00:24:25.092 01:17:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 53359
00:24:25.092 01:17:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 53359 ']'
00:24:25.092 01:17:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 53359
00:24:25.092 01:17:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:24:25.092 01:17:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:24:25.092 01:17:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 53359
00:24:25.092 01:17:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:24:25.092 01:17:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:24:25.092 01:17:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 53359'
killing process with pid 53359
00:24:25.092 01:17:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 53359
Received shutdown signal, test time was about 2.000000 seconds
00:24:25.092
00:24:25.092 Latency(us)
00:24:25.092 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:25.092 ===================================================================================================================
00:24:25.092 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:25.092 01:17:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 53359
00:24:25.351 01:17:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:24:25.351 01:17:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:24:25.351 01:17:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:24:25.351 01:17:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:24:25.351 01:17:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:24:25.351 01:17:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=53782
00:24:25.351 01:17:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:24:25.351 01:17:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 53782 /var/tmp/bperf.sock
00:24:25.351 01:17:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 53782 ']'
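With the randread pass done and pid 53359 reaped, the harness launches a fresh bdevperf for the write pass on the same UNIX-domain RPC socket. A minimal sketch of the launch-and-wait pattern visible in the trace above, with the flags taken from this run (the -z flag keeps bdevperf idle until perform_tests arrives over the socket; the polling loop is a rough stand-in for the suite's waitforlisten helper):

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK_DIR/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
      -w randwrite -o 4096 -t 2 -q 128 -z &
  bperfpid=$!
  # Wait until the RPC socket accepts connections before configuring bdevs.
  until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done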
00:24:25.351 01:17:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:24:25.351 01:17:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:24:25.351 01:17:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:24:25.351 01:17:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:24:25.351 01:17:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:25.351 [2024-07-16 01:17:41.144200] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization...
00:24:25.351 [2024-07-16 01:17:41.144306] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid53782 ]
00:24:25.351 EAL: No free 2048 kB hugepages reported on node 1
00:24:25.351 [2024-07-16 01:17:41.206196] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:25.351 [2024-07-16 01:17:41.311302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:24:25.609 01:17:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:24:25.609 01:17:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:24:25.609 01:17:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:25.609 01:17:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:25.867 01:17:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:24:25.867 01:17:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:25.867 01:17:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:25.867 01:17:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:25.867 01:17:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:25.867 01:17:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:26.125 nvme0n1
00:24:26.125 01:17:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:24:26.125 01:17:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:26.125 01:17:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:26.125 01:17:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
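The trace above is the complete error-injection setup for the write pass: the host-side bdev_nvme layer is told to keep NVMe error statistics and retry failed I/O indefinitely (--bdev-retry-count -1), any stale crc32c injection is cleared, the controller is attached with data digest enabled (--ddgst), and the accel layer is then told to corrupt every 256th crc32c result so computed digests stop matching the payload. Condensed to the bare RPC calls, assuming (as rpc_cmd without -s implies) that the injection RPCs go to the nvmf target app's default RPC socket while the bdev RPCs go to bdevperf:

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  BPERF="$SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock"   # bdevperf (host side)
  TGT="$SPDK_DIR/scripts/rpc.py"                            # nvmf target, default socket
  $BPERF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  $TGT accel_error_inject_error -o crc32c -t disable        # start from a clean injection state
  $BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0        # prints the new bdev: nvme0n1
  $TGT accel_error_inject_error -o crc32c -t corrupt -i 256 # corrupt every 256th crc32c result

Because retries are unlimited, each digest failure is retried rather than surfaced to bdevperf, so the write pass below completes while still accumulating transient transport errors in the statistics checked afterwards.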
00:24:26.125 01:17:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:24:26.125 01:17:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:24:26.125 Running I/O for 2 seconds...
00:24:26.125 [2024-07-16 01:17:42.102341] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb66a0) with pdu=0x2000190ee5c8
00:24:26.125 [2024-07-16 01:17:42.103289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.125 [2024-07-16 01:17:42.103326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:24:26.125 [2024-07-16 01:17:42.113715] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb66a0) with pdu=0x2000190fac10
00:24:26.125 [2024-07-16 01:17:42.114656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:3186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.125 [2024-07-16 01:17:42.114686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
[... the same three-record pattern (tcp.c data digest error on tqpair=(0x1eb66a0), WRITE command print, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for further WRITE commands from 01:17:42.126 through 01:17:43.282 ...]
00:24:27.473 [2024-07-16 01:17:43.294179] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb66a0) with pdu=0x2000190fbcf0
00:24:27.473 [2024-07-16 01:17:43.295858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:22741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.473 [2024-07-16 01:17:43.295906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:24:27.473 [2024-07-16 01:17:43.306345] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb66a0) with pdu=0x2000190f8e88
00:24:27.473 [2024-07-16 01:17:43.308232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:21221 len:1 SGL DATA
BLOCK OFFSET 0x0 len:0x1000 00:24:27.473 [2024-07-16 01:17:43.308275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.473 [2024-07-16 01:17:43.314619] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb66a0) with pdu=0x2000190f5be8 00:24:27.473 [2024-07-16 01:17:43.315451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.473 [2024-07-16 01:17:43.315493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:27.473 [2024-07-16 01:17:43.325537] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb66a0) with pdu=0x2000190f6458 00:24:27.473 [2024-07-16 01:17:43.326384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:23246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.473 [2024-07-16 01:17:43.326427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:27.473 [2024-07-16 01:17:43.337667] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb66a0) with pdu=0x2000190f3e60 00:24:27.473 [2024-07-16 01:17:43.338718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:18283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.473 [2024-07-16 01:17:43.338760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:27.473 [2024-07-16 01:17:43.349801] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb66a0) with pdu=0x2000190e6738 00:24:27.473 [2024-07-16 01:17:43.350988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:24750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.473 [2024-07-16 01:17:43.351017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:27.473 [2024-07-16 01:17:43.362045] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb66a0) with pdu=0x2000190f7538 00:24:27.473 [2024-07-16 01:17:43.363357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:8187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.473 [2024-07-16 01:17:43.363401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:27.473 [2024-07-16 01:17:43.374525] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb66a0) with pdu=0x2000190e99d8 00:24:27.473 [2024-07-16 01:17:43.375978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:23675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.473 [2024-07-16 01:17:43.376007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:27.473 [2024-07-16 01:17:43.385359] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb66a0) with pdu=0x2000190e5ec8 00:24:27.473 [2024-07-16 01:17:43.386411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 
lba:23099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.473 [2024-07-16 01:17:43.386459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:27.473 [2024-07-16 01:17:43.397057] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb66a0) with pdu=0x2000190f5378 00:24:27.473 [2024-07-16 01:17:43.397930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:20921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.473 [2024-07-16 01:17:43.397967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:27.473 [2024-07-16 01:17:43.409085] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb66a0) with pdu=0x2000190f20d8 00:24:27.473 [2024-07-16 01:17:43.410222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:15934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.473 [2024-07-16 01:17:43.410251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.473 [2024-07-16 01:17:43.420109] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb66a0) with pdu=0x2000190ebfd0 00:24:27.473 [2024-07-16 01:17:43.421931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:4195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.473 [2024-07-16 01:17:43.421969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:27.473 [2024-07-16 01:17:43.432475] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb66a0) with pdu=0x2000190f57b0 00:24:27.473 [2024-07-16 01:17:43.433970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:4391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.473 [2024-07-16 01:17:43.433997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:27.473 [2024-07-16 01:17:43.444655] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb66a0) with pdu=0x2000190eea00 00:24:27.473 [2024-07-16 01:17:43.446277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:22755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.473 [2024-07-16 01:17:43.446318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:27.473 [2024-07-16 01:17:43.455632] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb66a0) with pdu=0x2000190ed4e8 00:24:27.473 [2024-07-16 01:17:43.456787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:5279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.473 [2024-07-16 01:17:43.456829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:27.731 [2024-07-16 01:17:43.467545] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb66a0) with pdu=0x2000190f8e88 00:24:27.731 [2024-07-16 01:17:43.468606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:73 nsid:1 lba:4860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.731 [2024-07-16 01:17:43.468634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:27.731 [2024-07-16 01:17:43.478552] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb66a0) with pdu=0x2000190f4f40 00:24:27.731 [2024-07-16 01:17:43.480362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:22850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.731 [2024-07-16 01:17:43.480391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:27.731 [2024-07-16 01:17:43.489424] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb66a0) with pdu=0x2000190f0ff8 00:24:27.731 [2024-07-16 01:17:43.490286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:25534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.731 [2024-07-16 01:17:43.490312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:27.731 [2024-07-16 01:17:43.501460] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb66a0) with pdu=0x2000190f6890 00:24:27.731 [2024-07-16 01:17:43.502514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:2060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.731 [2024-07-16 01:17:43.502556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:27.731 [2024-07-16 01:17:43.512564] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb66a0) with pdu=0x2000190f6020 00:24:27.731 [2024-07-16 01:17:43.513637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:1649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.731 [2024-07-16 01:17:43.513678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:27.731 [2024-07-16 01:17:43.525706] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb66a0) with pdu=0x2000190eaab8 00:24:27.731 [2024-07-16 01:17:43.526870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.731 [2024-07-16 01:17:43.526897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:27.731 [2024-07-16 01:17:43.536630] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb66a0) with pdu=0x2000190fac10 00:24:27.731 [2024-07-16 01:17:43.537822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.731 [2024-07-16 01:17:43.537863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:27.731 [2024-07-16 01:17:43.548690] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb66a0) with pdu=0x2000190f6890 00:24:27.731 [2024-07-16 01:17:43.550030] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:13216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.731 [2024-07-16 01:17:43.550065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:27.731 [2024-07-16 01:17:43.559845] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb66a0) with pdu=0x2000190f6458 00:24:27.731 [2024-07-16 01:17:43.560734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:4246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.731 [2024-07-16 01:17:43.560762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:27.731 [2024-07-16 01:17:43.571682] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb66a0) with pdu=0x2000190f20d8 00:24:27.731 [2024-07-16 01:17:43.572396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:16197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.731 [2024-07-16 01:17:43.572424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:27.731 [2024-07-16 01:17:43.583796] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb66a0) with pdu=0x2000190ec408 00:24:27.731 [2024-07-16 01:17:43.584805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:15410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.731 [2024-07-16 01:17:43.584832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:27.731 [2024-07-16 01:17:43.597330] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb66a0) with pdu=0x2000190e73e0 00:24:27.731 [2024-07-16 01:17:43.599092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:23207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.731 [2024-07-16 01:17:43.599139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:27.731 [2024-07-16 01:17:43.609496] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb66a0) with pdu=0x2000190e0ea0 00:24:27.731 [2024-07-16 01:17:43.611484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:7412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.731 [2024-07-16 01:17:43.611526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:27.732 [2024-07-16 01:17:43.618835] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb66a0) with pdu=0x2000190fb480 00:24:27.732 [2024-07-16 01:17:43.620058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:13124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.732 [2024-07-16 01:17:43.620098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:27.732 [2024-07-16 01:17:43.630969] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb66a0) with pdu=0x2000190de470 00:24:27.732 [2024-07-16 
01:17:43.632300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:90 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.732 [2024-07-16 01:17:43.632326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:27.732 [2024-07-16 01:17:43.643045] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb66a0) with pdu=0x2000190fda78 00:24:27.732 [2024-07-16 01:17:43.644668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:16028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.732 [2024-07-16 01:17:43.644709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:27.732 [2024-07-16 01:17:43.655409] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb66a0) with pdu=0x2000190e0a68 00:24:27.732 [2024-07-16 01:17:43.657153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:1730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.732 [2024-07-16 01:17:43.657179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:27.732 [2024-07-16 01:17:43.666331] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb66a0) with pdu=0x2000190f57b0 00:24:27.732 [2024-07-16 01:17:43.667759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:15521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.732 [2024-07-16 01:17:43.667787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:27.732 [2024-07-16 01:17:43.676863] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb66a0) with pdu=0x2000190e6300 00:24:27.732 [2024-07-16 01:17:43.678655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:12625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.732 [2024-07-16 01:17:43.678683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:27.732 [2024-07-16 01:17:43.687678] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb66a0) with pdu=0x2000190e38d0 00:24:27.732 [2024-07-16 01:17:43.688515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:4393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.732 [2024-07-16 01:17:43.688555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:27.732 [2024-07-16 01:17:43.699706] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb66a0) with pdu=0x2000190e3060 00:24:27.732 [2024-07-16 01:17:43.700662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:12009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.732 [2024-07-16 01:17:43.700687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:27.732 [2024-07-16 01:17:43.710731] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb66a0) with pdu=0x2000190f5378 
00:24:27.732 [2024-07-16 01:17:43.711747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:6239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.732 [2024-07-16 01:17:43.711787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:27.732 [2024-07-16 01:17:43.722887] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb66a0) with pdu=0x2000190f0bc0 00:24:27.732 [2024-07-16 01:17:43.724168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:9236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.732 [2024-07-16 01:17:43.724210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:27.991 [2024-07-16 01:17:43.736022] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb66a0) with pdu=0x2000190fd208 00:24:27.991 [2024-07-16 01:17:43.737432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:3628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.991 [2024-07-16 01:17:43.737473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:27.991 [2024-07-16 01:17:43.748219] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb66a0) with pdu=0x2000190ea248 00:24:27.991 [2024-07-16 01:17:43.749655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:14846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.991 [2024-07-16 01:17:43.749681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:27.991 [2024-07-16 01:17:43.758003] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb66a0) with pdu=0x2000190eb760 00:24:27.991 [2024-07-16 01:17:43.758773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:21682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.991 [2024-07-16 01:17:43.758801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:27.991 [2024-07-16 01:17:43.770092] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb66a0) with pdu=0x2000190eea00 00:24:27.991 [2024-07-16 01:17:43.771081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.991 [2024-07-16 01:17:43.771109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:27.991 [2024-07-16 01:17:43.782338] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb66a0) with pdu=0x2000190fc998 00:24:27.991 [2024-07-16 01:17:43.783479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.991 [2024-07-16 01:17:43.783506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:27.991 [2024-07-16 01:17:43.793282] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb66a0) 
with pdu=0x2000190e3498 00:24:27.991 [2024-07-16 01:17:43.795037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:2633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.991 [2024-07-16 01:17:43.795065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:27.991 [2024-07-16 01:17:43.804137] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb66a0) with pdu=0x2000190e88f8 00:24:27.991 [2024-07-16 01:17:43.805042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:22268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.991 [2024-07-16 01:17:43.805068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:27.991 [2024-07-16 01:17:43.816301] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb66a0) with pdu=0x2000190ec840 00:24:27.991 [2024-07-16 01:17:43.817320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:8818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.991 [2024-07-16 01:17:43.817346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:27.991 [2024-07-16 01:17:43.827285] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb66a0) with pdu=0x2000190f6458 00:24:27.991 [2024-07-16 01:17:43.828234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:5615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.991 [2024-07-16 01:17:43.828259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:27.991 [2024-07-16 01:17:43.840261] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb66a0) with pdu=0x2000190fac10 00:24:27.991 [2024-07-16 01:17:43.841540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:18883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.991 [2024-07-16 01:17:43.841580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:27.991 [2024-07-16 01:17:43.852386] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb66a0) with pdu=0x2000190fcdd0 00:24:27.991 [2024-07-16 01:17:43.853704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:9683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.991 [2024-07-16 01:17:43.853730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:27.991 [2024-07-16 01:17:43.863485] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb66a0) with pdu=0x2000190df118 00:24:27.991 [2024-07-16 01:17:43.864737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.991 [2024-07-16 01:17:43.864763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:27.991 [2024-07-16 01:17:43.874410] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1eb66a0) with pdu=0x2000190e6b70 00:24:27.991 [2024-07-16 01:17:43.875266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:5462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.991 [2024-07-16 01:17:43.875293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:27.991 [2024-07-16 01:17:43.887527] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb66a0) with pdu=0x2000190f2d80 00:24:27.991 [2024-07-16 01:17:43.888937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:17428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.991 [2024-07-16 01:17:43.888971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:27.991 [2024-07-16 01:17:43.898265] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb66a0) with pdu=0x2000190f8618 00:24:27.991 [2024-07-16 01:17:43.899293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:8952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.991 [2024-07-16 01:17:43.899325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:27.991 [2024-07-16 01:17:43.910080] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb66a0) with pdu=0x2000190e49b0 00:24:27.991 [2024-07-16 01:17:43.911000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:4401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.991 [2024-07-16 01:17:43.911027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:27.991 [2024-07-16 01:17:43.923477] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb66a0) with pdu=0x2000190fdeb0 00:24:27.991 [2024-07-16 01:17:43.925187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.991 [2024-07-16 01:17:43.925214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:27.991 [2024-07-16 01:17:43.935652] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb66a0) with pdu=0x2000190de470 00:24:27.991 [2024-07-16 01:17:43.937494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.991 [2024-07-16 01:17:43.937536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:27.991 [2024-07-16 01:17:43.943894] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb66a0) with pdu=0x2000190f6cc8 00:24:27.991 [2024-07-16 01:17:43.944746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.991 [2024-07-16 01:17:43.944787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:27.991 [2024-07-16 01:17:43.955992] tcp.c:2081:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1eb66a0) with pdu=0x2000190eaab8 00:24:27.991 [2024-07-16 01:17:43.956914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:18433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.991 [2024-07-16 01:17:43.956939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:27.991 [2024-07-16 01:17:43.966911] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb66a0) with pdu=0x2000190ea248 00:24:27.991 [2024-07-16 01:17:43.967888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:20535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.991 [2024-07-16 01:17:43.967914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:27.991 [2024-07-16 01:17:43.979046] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb66a0) with pdu=0x2000190e49b0 00:24:27.991 [2024-07-16 01:17:43.980211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:8260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.991 [2024-07-16 01:17:43.980238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:28.250 [2024-07-16 01:17:43.992182] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb66a0) with pdu=0x2000190efae0 00:24:28.250 [2024-07-16 01:17:43.993651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:14127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.250 [2024-07-16 01:17:43.993692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:28.250 [2024-07-16 01:17:44.004358] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb66a0) with pdu=0x2000190fc560 00:24:28.250 [2024-07-16 01:17:44.005720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:14603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.250 [2024-07-16 01:17:44.005746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:28.250 [2024-07-16 01:17:44.015377] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb66a0) with pdu=0x2000190e8d30 00:24:28.250 [2024-07-16 01:17:44.016828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:15225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.250 [2024-07-16 01:17:44.016869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:28.250 [2024-07-16 01:17:44.026156] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb66a0) with pdu=0x2000190e0ea0 00:24:28.250 [2024-07-16 01:17:44.027187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:16354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.250 [2024-07-16 01:17:44.027228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:28.250 [2024-07-16 01:17:44.038107] 
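Each injected error above surfaces as a three-line group: the target-side digest failure from tcp.c, the WRITE that carried the corrupted payload, and its completion, where (00/22) decodes as status code type 0x0 (generic command status) and status code 0x22, NVMe's Command Transient Transport Error. When eyeballing a captured copy of such a run, a quick tally of the injected errors is one grep away (the log file name is hypothetical):

    grep -c 'data_crc32_calc_done: \*ERROR\*: Data digest error' bperf.log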
00:24:28.250 [... repeated Data digest error / WRITE / COMMAND TRANSIENT TRANSPORT ERROR (00/22) entries on tqpair=(0x1eb66a0) continue until the 2-second run completes ...]
00:24:28.250
00:24:28.250 Latency(us)
00:24:28.250 Device Information          : runtime(s)     IOPS     MiB/s   Fail/s   TO/s   Average      min       max
00:24:28.250 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:24:28.250 nvme0n1                     :       2.01  21844.36    85.33     0.00   0.00   5850.00  2463.67  14466.47
00:24:28.250 ===================================================================================================================
00:24:28.250 Total                       :             21844.36    85.33     0.00   0.00   5850.00  2463.67  14466.47
00:24:28.250 0
00:24:28.250 01:17:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:24:28.250 01:17:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:24:28.250 01:17:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
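The latency table is self-consistent: 21844.36 IOPS x 4096 B = 21844.36/256 MiB/s = 85.33 MiB/s, despite every write completing with an injected transient error. get_transient_errcount (digest.sh@27/@28 above) then reads back the counter kept because bdevperf was started with --nvme-error-stat, piping bdev_get_iostat through jq. A minimal offline sketch of that extraction; the JSON shape is inferred from the jq path and the value 171 from the assertion that follows in the trace, so both are illustrative:

    echo '{"bdevs":[{"name":"nvme0n1","driver_specific":{"nvme_error":{"status_code":{"command_transient_transport_error":171}}}}]}' |
        jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
    # -> 171; digest.sh then asserts this count is greater than zero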
00:24:28.250 01:17:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:24:28.508 01:17:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 171 > 0 ))
00:24:28.508 01:17:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 53782
00:24:28.508 01:17:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 53782 ']'
00:24:28.508 01:17:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 53782
00:24:28.508 01:17:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:24:28.508 01:17:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:24:28.508 01:17:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 53782
00:24:28.508 01:17:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:24:28.508 01:17:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:24:28.508 01:17:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 53782'
00:24:28.508 killing process with pid 53782
00:24:28.508 01:17:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 53782
00:24:28.508 Received shutdown signal, test time was about 2.000000 seconds
00:24:28.508
00:24:28.508 Latency(us)
00:24:28.508 Device Information          : runtime(s)     IOPS     MiB/s   Fail/s   TO/s   Average      min       max
00:24:28.508 ===================================================================================================================
00:24:28.508 Total                       :                 0.00     0.00     0.00   0.00      0.00     0.00      0.00
00:24:28.508 01:17:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 53782
00:24:28.766 01:17:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:24:28.766 01:17:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:24:28.766 01:17:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:24:28.766 01:17:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:24:28.766 01:17:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:24:28.766 01:17:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=54293
00:24:28.766 01:17:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 54293 /var/tmp/bperf.sock
00:24:28.766 01:17:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:24:28.766 01:17:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 54293 ']'
00:24:28.766 01:17:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:24:28.766 01:17:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:24:28.766 01:17:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
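The trace above kills the 4 KiB bperf instance (pid 53782) and relaunches bdevperf for the 128 KiB / QD 16 error pass; -z makes it start idle and wait for RPCs on /var/tmp/bperf.sock, and waitforlisten polls that socket for up to max_retries=100 attempts. A condensed sketch of the launch-and-wait pattern; the rpc_get_methods liveness probe and the retry delay are assumptions standing in for waitforlisten's internals:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
        -w randwrite -o 131072 -t 2 -q 16 -z &
    bperfpid=$!

    # Poll the RPC socket until bdevperf answers (probe RPC is an assumption).
    for ((i = 0; i < 100; i++)); do
        "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock rpc_get_methods &>/dev/null && break
        sleep 0.5
    done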
00:24:28.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:24:28.766 01:17:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:24:28.766 01:17:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:28.766 [2024-07-16 01:17:44.692161] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization...
00:24:28.766 [2024-07-16 01:17:44.692241] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54293 ]
00:24:28.766 I/O size of 131072 is greater than zero copy threshold (65536).
00:24:28.766 Zero copy mechanism will not be used.
00:24:28.766 EAL: No free 2048 kB hugepages reported on node 1
00:24:28.766 [2024-07-16 01:17:44.753125] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:28.766 [2024-07-16 01:17:44.862590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:24:29.024 01:17:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:24:29.024 01:17:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:24:29.024 01:17:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:29.024 01:17:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:29.282 01:17:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:24:29.282 01:17:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:29.282 01:17:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:29.540 01:17:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:29.540 01:17:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:29.540 01:17:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:29.797 nvme0n1
00:24:30.056 01:17:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:24:30.056 01:17:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:30.056 01:17:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:30.056 01:17:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:30.056 01:17:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:24:30.056 01:17:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
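Pulling the digest.sh steps above together: error statistics and unlimited bdev retries are switched on, CRC32C injection is disabled while the controller is attached with the data digest enabled (--ddgst), and only then is the accel layer armed to corrupt CRC32C so the target's digest check fails on subsequent WRITEs. A sketch of the same RPC sequence; the rpc wrapper is illustrative, and reading -i 32 as an injection count is an assumption:

    SPDK=${SPDK:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
    rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }

    rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1  # count errors, retry forever
    rpc accel_error_inject_error -o crc32c -t disable                  # clean digests during setup
    rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                 # attach with data digest on
    rpc accel_error_inject_error -o crc32c -t corrupt -i 32            # arm corruption (-i 32: assumed count)
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests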
00:24:30.056 I/O size of 131072 is greater than zero copy threshold (65536).
00:24:30.056 Zero copy mechanism will not be used.
00:24:30.056 Running I/O for 2 seconds...
00:24:30.056 [2024-07-16 01:17:45.919700] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90
00:24:30.056 [2024-07-16 01:17:45.920086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:30.056 [2024-07-16 01:17:45.920140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:30.056 [... repeated entries of the same Data digest error / WRITE / COMMAND TRANSIENT TRANSPORT ERROR (00/22) pattern on tqpair=(0x1cebc90) omitted; only the timestamps, lba values, and sqhd fields advance. Log truncated at 2024-07-16 01:17:46.260374 ...]
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.316 [2024-07-16 01:17:46.260702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.316 [2024-07-16 01:17:46.260729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.316 [2024-07-16 01:17:46.269873] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.316 [2024-07-16 01:17:46.270217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.316 [2024-07-16 01:17:46.270245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.316 [2024-07-16 01:17:46.277838] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.316 [2024-07-16 01:17:46.278193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.316 [2024-07-16 01:17:46.278222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.316 [2024-07-16 01:17:46.284848] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.316 [2024-07-16 01:17:46.285193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.316 [2024-07-16 01:17:46.285221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.316 [2024-07-16 01:17:46.291719] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.316 [2024-07-16 01:17:46.292038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.316 [2024-07-16 01:17:46.292065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.316 [2024-07-16 01:17:46.299006] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.316 [2024-07-16 01:17:46.299322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.316 [2024-07-16 01:17:46.299350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.316 [2024-07-16 01:17:46.306431] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.316 [2024-07-16 01:17:46.306739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.316 [2024-07-16 01:17:46.306766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:24:30.574 [2024-07-16 01:17:46.314259] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.574 [2024-07-16 01:17:46.314606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.574 [2024-07-16 01:17:46.314633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.574 [2024-07-16 01:17:46.323204] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.574 [2024-07-16 01:17:46.323519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.574 [2024-07-16 01:17:46.323545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.575 [2024-07-16 01:17:46.330156] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.575 [2024-07-16 01:17:46.330489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.575 [2024-07-16 01:17:46.330516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.575 [2024-07-16 01:17:46.337275] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.575 [2024-07-16 01:17:46.337598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.575 [2024-07-16 01:17:46.337624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.575 [2024-07-16 01:17:46.344481] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.575 [2024-07-16 01:17:46.344826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.575 [2024-07-16 01:17:46.344867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.575 [2024-07-16 01:17:46.352201] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.575 [2024-07-16 01:17:46.352544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.575 [2024-07-16 01:17:46.352576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.575 [2024-07-16 01:17:46.360699] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.575 [2024-07-16 01:17:46.361022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.575 [2024-07-16 01:17:46.361049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.575 [2024-07-16 01:17:46.368486] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.575 [2024-07-16 01:17:46.368795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.575 [2024-07-16 01:17:46.368821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.575 [2024-07-16 01:17:46.376091] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.575 [2024-07-16 01:17:46.376411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.575 [2024-07-16 01:17:46.376437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.575 [2024-07-16 01:17:46.383288] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.575 [2024-07-16 01:17:46.383594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.575 [2024-07-16 01:17:46.383620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.575 [2024-07-16 01:17:46.390144] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.575 [2024-07-16 01:17:46.390461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.575 [2024-07-16 01:17:46.390489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.575 [2024-07-16 01:17:46.398341] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.575 [2024-07-16 01:17:46.398668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.575 [2024-07-16 01:17:46.398696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.575 [2024-07-16 01:17:46.406695] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.575 [2024-07-16 01:17:46.407033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.575 [2024-07-16 01:17:46.407060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.575 [2024-07-16 01:17:46.415275] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.575 [2024-07-16 01:17:46.415594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.575 [2024-07-16 01:17:46.415620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.575 [2024-07-16 01:17:46.422807] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.575 [2024-07-16 01:17:46.423161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.575 [2024-07-16 01:17:46.423189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.575 [2024-07-16 01:17:46.431176] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.575 [2024-07-16 01:17:46.431518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.575 [2024-07-16 01:17:46.431547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.575 [2024-07-16 01:17:46.438397] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.575 [2024-07-16 01:17:46.438703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.575 [2024-07-16 01:17:46.438729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.575 [2024-07-16 01:17:46.446735] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.575 [2024-07-16 01:17:46.447081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.575 [2024-07-16 01:17:46.447108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.575 [2024-07-16 01:17:46.454630] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.575 [2024-07-16 01:17:46.454745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.575 [2024-07-16 01:17:46.454773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.575 [2024-07-16 01:17:46.461995] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.575 [2024-07-16 01:17:46.462334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.575 [2024-07-16 01:17:46.462361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.575 [2024-07-16 01:17:46.469570] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.575 [2024-07-16 01:17:46.469885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.575 [2024-07-16 01:17:46.469912] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.575 [2024-07-16 01:17:46.477063] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.575 [2024-07-16 01:17:46.477409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.575 [2024-07-16 01:17:46.477437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.575 [2024-07-16 01:17:46.484376] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.575 [2024-07-16 01:17:46.484706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.575 [2024-07-16 01:17:46.484733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.575 [2024-07-16 01:17:46.491722] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.575 [2024-07-16 01:17:46.492037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.575 [2024-07-16 01:17:46.492065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.575 [2024-07-16 01:17:46.499465] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.575 [2024-07-16 01:17:46.499800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.575 [2024-07-16 01:17:46.499841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.575 [2024-07-16 01:17:46.507254] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.575 [2024-07-16 01:17:46.507618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.575 [2024-07-16 01:17:46.507646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.575 [2024-07-16 01:17:46.514770] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.575 [2024-07-16 01:17:46.515135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.575 [2024-07-16 01:17:46.515163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.575 [2024-07-16 01:17:46.522676] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.575 [2024-07-16 01:17:46.523013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.575 
[2024-07-16 01:17:46.523039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.575 [2024-07-16 01:17:46.530608] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.575 [2024-07-16 01:17:46.530937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.575 [2024-07-16 01:17:46.530971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.575 [2024-07-16 01:17:46.537921] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.575 [2024-07-16 01:17:46.538239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.575 [2024-07-16 01:17:46.538265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.575 [2024-07-16 01:17:46.545589] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.575 [2024-07-16 01:17:46.545918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.575 [2024-07-16 01:17:46.545967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.575 [2024-07-16 01:17:46.553454] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.576 [2024-07-16 01:17:46.553780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.576 [2024-07-16 01:17:46.553811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.576 [2024-07-16 01:17:46.560527] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.576 [2024-07-16 01:17:46.560856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.576 [2024-07-16 01:17:46.560883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.576 [2024-07-16 01:17:46.567140] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.576 [2024-07-16 01:17:46.567452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.576 [2024-07-16 01:17:46.567477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.834 [2024-07-16 01:17:46.573739] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.834 [2024-07-16 01:17:46.574068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.834 [2024-07-16 01:17:46.574095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.834 [2024-07-16 01:17:46.580928] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.834 [2024-07-16 01:17:46.581270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.834 [2024-07-16 01:17:46.581312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.834 [2024-07-16 01:17:46.588720] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.834 [2024-07-16 01:17:46.589047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.834 [2024-07-16 01:17:46.589073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.834 [2024-07-16 01:17:46.595728] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.834 [2024-07-16 01:17:46.596066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.834 [2024-07-16 01:17:46.596093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.834 [2024-07-16 01:17:46.602749] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.834 [2024-07-16 01:17:46.603084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.834 [2024-07-16 01:17:46.603112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.834 [2024-07-16 01:17:46.610257] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.834 [2024-07-16 01:17:46.610573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.834 [2024-07-16 01:17:46.610600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.834 [2024-07-16 01:17:46.619206] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.834 [2024-07-16 01:17:46.619538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.834 [2024-07-16 01:17:46.619564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.834 [2024-07-16 01:17:46.627618] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.834 [2024-07-16 01:17:46.627940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.834 [2024-07-16 01:17:46.627973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.834 [2024-07-16 01:17:46.636735] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.834 [2024-07-16 01:17:46.637071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.834 [2024-07-16 01:17:46.637099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.834 [2024-07-16 01:17:46.644759] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.834 [2024-07-16 01:17:46.645123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.834 [2024-07-16 01:17:46.645151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.834 [2024-07-16 01:17:46.652641] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.834 [2024-07-16 01:17:46.652982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.834 [2024-07-16 01:17:46.653008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.834 [2024-07-16 01:17:46.659821] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.834 [2024-07-16 01:17:46.660167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.834 [2024-07-16 01:17:46.660195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.834 [2024-07-16 01:17:46.667841] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.834 [2024-07-16 01:17:46.668176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.834 [2024-07-16 01:17:46.668217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.834 [2024-07-16 01:17:46.676558] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.834 [2024-07-16 01:17:46.676849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.834 [2024-07-16 01:17:46.676876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.834 [2024-07-16 01:17:46.685876] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.834 [2024-07-16 01:17:46.686209] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.834 [2024-07-16 01:17:46.686237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.834 [2024-07-16 01:17:46.694194] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.834 [2024-07-16 01:17:46.694372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.834 [2024-07-16 01:17:46.694399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.834 [2024-07-16 01:17:46.703343] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.835 [2024-07-16 01:17:46.703690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.835 [2024-07-16 01:17:46.703717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.835 [2024-07-16 01:17:46.712528] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.835 [2024-07-16 01:17:46.712869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.835 [2024-07-16 01:17:46.712895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.835 [2024-07-16 01:17:46.721563] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.835 [2024-07-16 01:17:46.721693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.835 [2024-07-16 01:17:46.721721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.835 [2024-07-16 01:17:46.730880] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.835 [2024-07-16 01:17:46.731224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.835 [2024-07-16 01:17:46.731253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.835 [2024-07-16 01:17:46.740559] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.835 [2024-07-16 01:17:46.740891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.835 [2024-07-16 01:17:46.740917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.835 [2024-07-16 01:17:46.749803] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.835 
[2024-07-16 01:17:46.750116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.835 [2024-07-16 01:17:46.750142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.835 [2024-07-16 01:17:46.758476] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.835 [2024-07-16 01:17:46.758798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.835 [2024-07-16 01:17:46.758824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.835 [2024-07-16 01:17:46.767776] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.835 [2024-07-16 01:17:46.768132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.835 [2024-07-16 01:17:46.768159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.835 [2024-07-16 01:17:46.776329] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.835 [2024-07-16 01:17:46.776655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.835 [2024-07-16 01:17:46.776682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.835 [2024-07-16 01:17:46.783534] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.835 [2024-07-16 01:17:46.783886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.835 [2024-07-16 01:17:46.783913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.835 [2024-07-16 01:17:46.791418] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.835 [2024-07-16 01:17:46.791746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.835 [2024-07-16 01:17:46.791773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.835 [2024-07-16 01:17:46.798772] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.835 [2024-07-16 01:17:46.799127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.835 [2024-07-16 01:17:46.799154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.835 [2024-07-16 01:17:46.805882] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.835 [2024-07-16 01:17:46.806219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.835 [2024-07-16 01:17:46.806247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.835 [2024-07-16 01:17:46.813107] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.835 [2024-07-16 01:17:46.813427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.835 [2024-07-16 01:17:46.813454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.835 [2024-07-16 01:17:46.821062] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:30.835 [2024-07-16 01:17:46.821383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.835 [2024-07-16 01:17:46.821411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.094 [2024-07-16 01:17:46.829535] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:31.094 [2024-07-16 01:17:46.829872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.094 [2024-07-16 01:17:46.829901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.094 [2024-07-16 01:17:46.838329] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:31.094 [2024-07-16 01:17:46.838659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.094 [2024-07-16 01:17:46.838686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.094 [2024-07-16 01:17:46.847135] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:31.094 [2024-07-16 01:17:46.847252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.094 [2024-07-16 01:17:46.847280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.094 [2024-07-16 01:17:46.855363] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:31.094 [2024-07-16 01:17:46.855678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.094 [2024-07-16 01:17:46.855705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.094 [2024-07-16 01:17:46.864359] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:31.094 [2024-07-16 01:17:46.864682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.094 [2024-07-16 01:17:46.864709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.094 [2024-07-16 01:17:46.871384] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:31.094 [2024-07-16 01:17:46.871715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.094 [2024-07-16 01:17:46.871741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.094 [2024-07-16 01:17:46.878419] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:31.094 [2024-07-16 01:17:46.878743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.094 [2024-07-16 01:17:46.878772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.094 [2024-07-16 01:17:46.885769] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:31.094 [2024-07-16 01:17:46.886087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.094 [2024-07-16 01:17:46.886115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.094 [2024-07-16 01:17:46.892800] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:31.094 [2024-07-16 01:17:46.893029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.094 [2024-07-16 01:17:46.893057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.094 [2024-07-16 01:17:46.900026] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:31.094 [2024-07-16 01:17:46.900349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.094 [2024-07-16 01:17:46.900382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.094 [2024-07-16 01:17:46.908667] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:31.094 [2024-07-16 01:17:46.909028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.094 [2024-07-16 01:17:46.909056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:24:31.094 [2024-07-16 01:17:46.916352] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:31.094 [2024-07-16 01:17:46.916681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.094 [2024-07-16 01:17:46.916709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.094 [2024-07-16 01:17:46.923932] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:31.094 [2024-07-16 01:17:46.924270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.094 [2024-07-16 01:17:46.924295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.094 [2024-07-16 01:17:46.931626] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:31.094 [2024-07-16 01:17:46.931979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.094 [2024-07-16 01:17:46.932005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.094 [2024-07-16 01:17:46.940036] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:31.094 [2024-07-16 01:17:46.940376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.094 [2024-07-16 01:17:46.940402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.094 [2024-07-16 01:17:46.948058] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:31.094 [2024-07-16 01:17:46.948397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.094 [2024-07-16 01:17:46.948423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.094 [2024-07-16 01:17:46.955373] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:31.094 [2024-07-16 01:17:46.955702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.094 [2024-07-16 01:17:46.955728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.094 [2024-07-16 01:17:46.962871] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90 00:24:31.094 [2024-07-16 01:17:46.963231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.094 [2024-07-16 01:17:46.963288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:31.094 [2024-07-16 01:17:46.970150] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90
00:24:31.094 [2024-07-16 01:17:46.970506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:31.094 [2024-07-16 01:17:46.970532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:31.094 [2024-07-16 01:17:46.977531] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90
00:24:31.094 [2024-07-16 01:17:46.977858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:31.094 [2024-07-16 01:17:46.977883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-line sequence (data_crc32_calc_done digest error, WRITE command print, TRANSIENT TRANSPORT ERROR completion) repeats for every in-flight WRITE between 01:17:46.985 and 01:17:47.868 on tqpair=(0x1cebc90); only the timestamp, lba, and sqhd fields change from entry to entry ...]
00:24:32.133 [2024-07-16 01:17:47.875926] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90
00:24:32.133 [2024-07-16 01:17:47.876312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.133 [2024-07-16 01:17:47.876339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:32.133 [2024-07-16 01:17:47.884439] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90
00:24:32.133 [2024-07-16 01:17:47.884775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.133 [2024-07-16 01:17:47.884803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:32.133 [2024-07-16 01:17:47.891900] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90
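The digest errors above are produced on the host side: with data digest enabled on the TCP connection, the initiator recomputes a CRC32C over every data PDU and, on a mismatch, fails the WRITE with a transient transport error rather than surfacing corrupt data. Whether digests are used at all is settled when the controller is attached. A minimal sketch of enabling them through SPDK's RPC interface follows; the bdev name, address, and NQN are illustrative placeholders rather than values taken from this run, and it assumes rpc.py's bdev_nvme_attach_controller --hdgst/--ddgst options:

    # Attach an NVMe-oF TCP controller with header and data digests enabled,
    # so every PDU is covered by CRC32C (placeholder address and NQN).
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 --hdgst --ddgst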
00:24:32.133 [2024-07-16 01:17:47.892207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.133 [2024-07-16 01:17:47.892234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:32.133 [2024-07-16 01:17:47.900455] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90
00:24:32.133 [2024-07-16 01:17:47.900847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.133 [2024-07-16 01:17:47.900889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:32.133 [2024-07-16 01:17:47.907304] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebc90) with pdu=0x2000190fef90
00:24:32.133 [2024-07-16 01:17:47.907620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.133 [2024-07-16 01:17:47.907647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:32.133
00:24:32.133 Latency(us)
00:24:32.133 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:32.133 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:24:32.133 nvme0n1 : 2.00 4004.69 500.59 0.00 0.00 3986.23 2949.12 11893.57
00:24:32.133 ===================================================================================================================
00:24:32.133 Total : 4004.69 500.59 0.00 0.00 3986.23 2949.12 11893.57
00:24:32.133 0
00:24:32.133 01:17:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:24:32.133 01:17:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:24:32.133 01:17:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:24:32.133 01:17:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:24:32.133 | .driver_specific
00:24:32.133 | .nvme_error
00:24:32.133 | .status_code
00:24:32.133 | .command_transient_transport_error'
00:24:32.409 01:17:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 258 > 0 ))
00:24:32.409 01:17:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 54293
00:24:32.409 01:17:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 54293 ']'
00:24:32.409 01:17:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 54293
00:24:32.409 01:17:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:24:32.409 01:17:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:24:32.409 01:17:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 54293
00:24:32.409 01:17:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
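The trace above also shows how the test decides pass or fail. The Job line reports 4004.69 IOPS at an IO size of 131072 bytes, which is exactly the 500.59 MiB/s in the next column (4004.69 x 128 KiB is about 500.6 MiB/s), and get_transient_errcount then reads the error counter out of bdev_get_iostat. A paraphrase of that helper as a self-contained shell function, with the RPC socket path taken from the trace:

    get_transient_errcount() {
        # Ask bperf for per-bdev iostat JSON and extract the running count
        # of completions that ended in COMMAND TRANSIENT TRANSPORT ERROR.
        local bdev=$1
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }

The (( 258 > 0 )) check that follows is this count being asserted non-zero: 258 injected digest failures were observed, so nvmf_digest_error passes.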
common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:32.409 01:17:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 54293' 00:24:32.409 killing process with pid 54293 00:24:32.409 01:17:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 54293 00:24:32.409 Received shutdown signal, test time was about 2.000000 seconds 00:24:32.409 00:24:32.409 Latency(us) 00:24:32.409 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:32.409 =================================================================================================================== 00:24:32.409 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:32.409 01:17:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 54293 00:24:32.668 01:17:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 52922 00:24:32.668 01:17:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 52922 ']' 00:24:32.668 01:17:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 52922 00:24:32.668 01:17:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:24:32.668 01:17:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:32.668 01:17:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 52922 00:24:32.668 01:17:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:32.668 01:17:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:32.668 01:17:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 52922' 00:24:32.668 killing process with pid 52922 00:24:32.668 01:17:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 52922 00:24:32.668 01:17:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 52922 00:24:32.927 00:24:32.927 real 0m15.448s 00:24:32.927 user 0m30.127s 00:24:32.927 sys 0m4.356s 00:24:32.927 01:17:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:32.927 01:17:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:32.927 ************************************ 00:24:32.927 END TEST nvmf_digest_error 00:24:32.927 ************************************ 00:24:32.927 01:17:48 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:24:32.927 01:17:48 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:24:32.927 01:17:48 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:24:32.927 01:17:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:32.927 01:17:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:24:32.927 01:17:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:32.927 01:17:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:24:32.927 01:17:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:32.927 01:17:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:32.927 rmmod nvme_tcp 00:24:32.927 rmmod nvme_fabrics 00:24:32.927 rmmod nvme_keyring 00:24:32.927 01:17:48 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:32.927 01:17:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:24:32.927 01:17:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:24:32.927 01:17:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 52922 ']' 00:24:32.927 01:17:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 52922 00:24:32.927 01:17:48 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 52922 ']' 00:24:32.927 01:17:48 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 52922 00:24:32.927 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (52922) - No such process 00:24:32.927 01:17:48 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 52922 is not found' 00:24:32.927 Process with pid 52922 is not found 00:24:32.927 01:17:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:32.927 01:17:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:32.927 01:17:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:32.927 01:17:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:32.927 01:17:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:32.927 01:17:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:32.927 01:17:48 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:32.927 01:17:48 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:34.835 01:17:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:34.835 00:24:34.835 real 0m35.347s 00:24:34.835 user 1m0.602s 00:24:34.835 sys 0m10.573s 00:24:34.835 01:17:50 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:35.094 01:17:50 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:35.094 ************************************ 00:24:35.094 END TEST nvmf_digest 00:24:35.094 ************************************ 00:24:35.094 01:17:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:35.094 01:17:50 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:24:35.094 01:17:50 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:24:35.094 01:17:50 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:24:35.094 01:17:50 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:24:35.094 01:17:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:35.094 01:17:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:35.094 01:17:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:35.094 ************************************ 00:24:35.094 START TEST nvmf_bdevperf 00:24:35.094 ************************************ 00:24:35.094 01:17:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:24:35.094 * Looking for test storage... 
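
[Editor's note: before the bdevperf suite gets going, the digest suite's nvmftestfini above is worth distilling. A sketch of the teardown steps visible in the log, with $nvmfpid standing in for the already-reaped pid 52922:]

    # Unload the initiator-side kernel modules (nvme_fabrics/nvme_keyring are
    # pulled out alongside nvme-tcp, as the rmmod lines above show), reap the
    # target if it is still alive, and flush the test interface.
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill -0 "$nvmfpid" 2>/dev/null && kill "$nvmfpid" \
        || echo "Process with pid $nvmfpid is not found"
    ip -4 addr flush cvl_0_1    # interface name as detected on this rig
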
00:24:35.094 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:24:35.094 01:17:50 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:24:35.094 01:17:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s
00:24:35.094 01:17:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:24:35.094 01:17:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:24:35.094 01:17:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:24:35.094 01:17:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:24:35.094 01:17:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:24:35.094 01:17:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:24:35.094 01:17:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:24:35.094 01:17:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:24:35.094 01:17:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:24:35.094 01:17:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:24:35.094 01:17:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:24:35.094 01:17:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a
00:24:35.094 01:17:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:24:35.094 01:17:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:24:35.094 01:17:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:24:35.094 01:17:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:24:35.094 01:17:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:24:35.094 01:17:50 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:24:35.094 01:17:50 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:24:35.094 01:17:50 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:24:35.094 01:17:50 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same golangci/protoc/go toolchain triplet repeated ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:35.094 01:17:50 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... duplicated toolchain entries elided ...]:/var/lib/snapd/snap/bin
00:24:35.094 01:17:50 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... duplicated toolchain entries elided ...]:/var/lib/snapd/snap/bin
00:24:35.094 01:17:50 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH
00:24:35.094 01:17:50 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... duplicated toolchain entries elided ...]:/var/lib/snapd/snap/bin
00:24:35.094 01:17:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0
00:24:35.094 01:17:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:24:35.094 01:17:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:24:35.094 01:17:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:24:35.094 01:17:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:24:35.094 01:17:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:24:35.094 01:17:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:24:35.094 01:17:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:24:35.094 01:17:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0
00:24:35.094 01:17:50 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64
00:24:35.094 01:17:50 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:24:35.094 01:17:50 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit
00:24:35.094 01:17:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:24:35.094 01:17:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:24:35.094 01:17:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs
00:24:35.094 01:17:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no
00:24:35.094 01:17:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns
00:24:35.094 01:17:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:35.094 01:17:50 nvmf_tcp.nvmf_bdevperf --
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:35.094 01:17:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:35.094 01:17:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:35.094 01:17:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:35.094 01:17:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:24:35.094 01:17:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:37.626 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:37.626 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:24:37.626 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:37.626 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:37.626 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:37.626 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:37.626 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:37.626 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:24:37.626 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:37.626 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:24:37.626 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:24:37.626 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:24:37.626 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:24:37.626 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:24:37.626 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:24:37.626 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:37.626 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:37.626 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:37.626 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:37.626 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:37.626 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:37.626 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:37.626 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:37.626 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:37.626 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:37.626 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:37.626 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:37.626 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:37.626 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:37.626 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:37.626 01:17:53 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:37.626 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:37.626 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:37.626 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:37.627 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:37.627 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:37.627 Found net devices under 0000:09:00.0: cvl_0_0 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:37.627 Found net devices under 0000:09:00.1: cvl_0_1 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:37.627 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:37.627 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:24:37.627 00:24:37.627 --- 10.0.0.2 ping statistics --- 00:24:37.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:37.627 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:37.627 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:37.627 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:24:37.627 00:24:37.627 --- 10.0.0.1 ping statistics --- 00:24:37.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:37.627 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=56647 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 56647 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 56647 ']' 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:37.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:37.627 [2024-07-16 01:17:53.255571] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:24:37.627 [2024-07-16 01:17:53.255652] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:37.627 EAL: No free 2048 kB hugepages reported on node 1 00:24:37.627 [2024-07-16 01:17:53.317585] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:37.627 [2024-07-16 01:17:53.417861] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
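
[Editor's note: pulling together the interface plumbing from the records above: nvmf_tcp_init moves one E810 port into a private network namespace, addresses both ends, opens the NVMe/TCP port, and the two pings prove the path before the target is started. Commands collected from this log:]

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, host namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # host -> namespace (0.218 ms above)
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> host (0.098 ms above)
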
00:24:37.627 [2024-07-16 01:17:53.417920] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:37.627 [2024-07-16 01:17:53.417947] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:37.627 [2024-07-16 01:17:53.417965] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:37.627 [2024-07-16 01:17:53.417975] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:37.627 [2024-07-16 01:17:53.418073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:37.627 [2024-07-16 01:17:53.418132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:37.627 [2024-07-16 01:17:53.418135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:37.627 [2024-07-16 01:17:53.566160] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:37.627 Malloc0 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.627 01:17:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:37.886 01:17:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.886 01:17:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:37.886 01:17:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.886 01:17:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:37.886 01:17:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 
00:24:37.886 01:17:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:37.886 [2024-07-16 01:17:53.631804] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:37.886 01:17:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.886 01:17:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:24:37.886 01:17:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:24:37.886 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:24:37.886 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:24:37.886 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:37.886 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:37.886 { 00:24:37.886 "params": { 00:24:37.886 "name": "Nvme$subsystem", 00:24:37.886 "trtype": "$TEST_TRANSPORT", 00:24:37.886 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:37.886 "adrfam": "ipv4", 00:24:37.886 "trsvcid": "$NVMF_PORT", 00:24:37.886 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:37.886 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:37.886 "hdgst": ${hdgst:-false}, 00:24:37.886 "ddgst": ${ddgst:-false} 00:24:37.886 }, 00:24:37.886 "method": "bdev_nvme_attach_controller" 00:24:37.886 } 00:24:37.886 EOF 00:24:37.886 )") 00:24:37.886 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:24:37.886 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:24:37.886 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:24:37.886 01:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:37.886 "params": { 00:24:37.886 "name": "Nvme1", 00:24:37.886 "trtype": "tcp", 00:24:37.886 "traddr": "10.0.0.2", 00:24:37.886 "adrfam": "ipv4", 00:24:37.886 "trsvcid": "4420", 00:24:37.886 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:37.886 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:37.886 "hdgst": false, 00:24:37.886 "ddgst": false 00:24:37.886 }, 00:24:37.886 "method": "bdev_nvme_attach_controller" 00:24:37.886 }' 00:24:37.886 [2024-07-16 01:17:53.679194] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:24:37.886 [2024-07-16 01:17:53.679288] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56676 ] 00:24:37.886 EAL: No free 2048 kB hugepages reported on node 1 00:24:37.886 [2024-07-16 01:17:53.741047] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:37.886 [2024-07-16 01:17:53.851606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:38.143 Running I/O for 1 seconds... 
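
[Editor's note: the rpc_cmd calls above provision the target, and the generated JSON tells bdevperf how to attach to it. The same setup as standalone rpc.py calls, a sketch only: the harness itself drives rpc_cmd inside the namespace and feeds bdevperf through /dev/fd/62, and the attach line re-expresses the printed JSON as its runtime-RPC equivalent.]

    # Target side: TCP transport, a 64 MiB malloc bdev, one subsystem with a listener.
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Initiator side: what the JSON above amounts to (host/data digests left off).
    rpc.py bdev_nvme_attach_controller -b Nvme1 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
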
00:24:39.076 00:24:39.076 Latency(us) 00:24:39.076 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:39.076 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:39.076 Verification LBA range: start 0x0 length 0x4000 00:24:39.076 Nvme1n1 : 1.01 8001.00 31.25 0.00 0.00 15927.79 1553.45 15437.37 00:24:39.076 =================================================================================================================== 00:24:39.076 Total : 8001.00 31.25 0.00 0.00 15927.79 1553.45 15437.37 00:24:39.333 01:17:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=56935 00:24:39.333 01:17:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:24:39.333 01:17:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:24:39.333 01:17:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:24:39.333 01:17:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:24:39.333 01:17:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:24:39.333 01:17:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:39.333 01:17:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:39.333 { 00:24:39.333 "params": { 00:24:39.333 "name": "Nvme$subsystem", 00:24:39.333 "trtype": "$TEST_TRANSPORT", 00:24:39.333 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:39.333 "adrfam": "ipv4", 00:24:39.333 "trsvcid": "$NVMF_PORT", 00:24:39.333 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:39.333 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:39.333 "hdgst": ${hdgst:-false}, 00:24:39.333 "ddgst": ${ddgst:-false} 00:24:39.333 }, 00:24:39.333 "method": "bdev_nvme_attach_controller" 00:24:39.333 } 00:24:39.333 EOF 00:24:39.333 )") 00:24:39.333 01:17:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:24:39.333 01:17:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:24:39.333 01:17:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:24:39.333 01:17:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:39.333 "params": { 00:24:39.333 "name": "Nvme1", 00:24:39.333 "trtype": "tcp", 00:24:39.333 "traddr": "10.0.0.2", 00:24:39.333 "adrfam": "ipv4", 00:24:39.333 "trsvcid": "4420", 00:24:39.333 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:39.333 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:39.333 "hdgst": false, 00:24:39.333 "ddgst": false 00:24:39.333 }, 00:24:39.333 "method": "bdev_nvme_attach_controller" 00:24:39.333 }' 00:24:39.591 [2024-07-16 01:17:55.355104] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:24:39.591 [2024-07-16 01:17:55.355181] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56935 ] 00:24:39.591 EAL: No free 2048 kB hugepages reported on node 1 00:24:39.591 [2024-07-16 01:17:55.414592] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:39.591 [2024-07-16 01:17:55.523438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:39.847 Running I/O for 15 seconds... 
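
[Editor's note: the second bdevperf run is the failover pass: -t 15 keeps verify I/O in flight long enough for the harness to hard-kill the target underneath it (the kill -9 56647 just below), and -f appears to be what lets the job survive the resulting I/O failures rather than exiting; that reading is inferred from the harness's use of the flag, not from bdevperf's docs. The shape of the test, names and pids from this run:]

    bdevperf --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 15 -f &
    bdevperfpid=$!    # 56935 here
    sleep 3           # let verify I/O reach steady state
    kill -9 56647     # hard-kill the nvmf target under live I/O; every queued
                      # command then completes as ABORTED - SQ DELETION (below)
    sleep 3           # bdevperf.sh@35; what follows is beyond this capture
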
00:24:42.372 01:17:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 56647
00:24:42.372 01:17:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:24:42.372 [2024-07-16 01:17:58.323332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:45296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.372 [2024-07-16 01:17:58.323383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[Editor's note: with the target gone, every command still queued on the connection drains the same way. The two-record pattern above repeats verbatim apart from cid/lba: READ notices for lba 45304 through 45664, then WRITE notices (SGL DATA BLOCK OFFSET 0x0 len:0x1000) from lba 45688 onward, each completed as ABORTED - SQ DELETION (00/08); the capture breaks off mid-record at lba 45952.]
OFFSET 0x0 len:0x1000 00:24:42.373 [2024-07-16 01:17:58.327462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.373 [2024-07-16 01:17:58.327485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:45960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.373 [2024-07-16 01:17:58.327505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.373 [2024-07-16 01:17:58.327528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:45968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.373 [2024-07-16 01:17:58.327550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.373 [2024-07-16 01:17:58.327572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:45976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.373 [2024-07-16 01:17:58.327594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.373 [2024-07-16 01:17:58.327616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:45984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.373 [2024-07-16 01:17:58.327647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.373 [2024-07-16 01:17:58.327669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:45992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.373 [2024-07-16 01:17:58.327691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.373 [2024-07-16 01:17:58.327713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:46000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.373 [2024-07-16 01:17:58.327735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.373 [2024-07-16 01:17:58.327757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:46008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.373 [2024-07-16 01:17:58.327779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.373 [2024-07-16 01:17:58.327801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:46016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.373 [2024-07-16 01:17:58.327824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.373 [2024-07-16 01:17:58.327847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:46024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.373 [2024-07-16 01:17:58.327869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.373 [2024-07-16 01:17:58.327892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:46032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.373 [2024-07-16 
01:17:58.327916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.373 [2024-07-16 01:17:58.327952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:46040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.373 [2024-07-16 01:17:58.327991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.373 [2024-07-16 01:17:58.328024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:46048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.373 [2024-07-16 01:17:58.328050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.373 [2024-07-16 01:17:58.328075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:46056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.373 [2024-07-16 01:17:58.328101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.373 [2024-07-16 01:17:58.328126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:46064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.373 [2024-07-16 01:17:58.328151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.373 [2024-07-16 01:17:58.328178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:46072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.373 [2024-07-16 01:17:58.328203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.374 [2024-07-16 01:17:58.328229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:46080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.374 [2024-07-16 01:17:58.328267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.374 [2024-07-16 01:17:58.328314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:46088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.374 [2024-07-16 01:17:58.328338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.374 [2024-07-16 01:17:58.328363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:46096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.374 [2024-07-16 01:17:58.328384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.374 [2024-07-16 01:17:58.328407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:46104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.374 [2024-07-16 01:17:58.328429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.374 [2024-07-16 01:17:58.328452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:46112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.374 [2024-07-16 01:17:58.328474] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.374 [2024-07-16 01:17:58.328497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:46120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.374 [2024-07-16 01:17:58.328516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.374 [2024-07-16 01:17:58.328539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:46128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.374 [2024-07-16 01:17:58.328559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.374 [2024-07-16 01:17:58.328584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:46136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.374 [2024-07-16 01:17:58.328604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.374 [2024-07-16 01:17:58.328628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:46144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.374 [2024-07-16 01:17:58.328648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.374 [2024-07-16 01:17:58.328674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:46152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.374 [2024-07-16 01:17:58.328695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.374 [2024-07-16 01:17:58.328719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:46160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.374 [2024-07-16 01:17:58.328739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.374 [2024-07-16 01:17:58.328763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:46168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.374 [2024-07-16 01:17:58.328784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.374 [2024-07-16 01:17:58.328808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:46176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.374 [2024-07-16 01:17:58.328828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.374 [2024-07-16 01:17:58.328852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:46184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.374 [2024-07-16 01:17:58.328872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.374 [2024-07-16 01:17:58.328905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:46192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.374 [2024-07-16 01:17:58.328926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.374 [2024-07-16 01:17:58.328981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:46200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.374 [2024-07-16 01:17:58.329008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.374 [2024-07-16 01:17:58.329036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:46208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.374 [2024-07-16 01:17:58.329059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.374 [2024-07-16 01:17:58.329087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:46216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.374 [2024-07-16 01:17:58.329113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.374 [2024-07-16 01:17:58.329142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:46224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.374 [2024-07-16 01:17:58.329166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.374 [2024-07-16 01:17:58.329194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:46232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.374 [2024-07-16 01:17:58.329218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.374 [2024-07-16 01:17:58.329267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:46240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.374 [2024-07-16 01:17:58.329289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.374 [2024-07-16 01:17:58.329329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:46248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.374 [2024-07-16 01:17:58.329349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.374 [2024-07-16 01:17:58.329373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:46256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.374 [2024-07-16 01:17:58.329393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.374 [2024-07-16 01:17:58.329417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:46264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.374 [2024-07-16 01:17:58.329437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.374 [2024-07-16 01:17:58.329461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:46272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.374 [2024-07-16 01:17:58.329482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:24:42.374 [2024-07-16 01:17:58.329508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:46280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.374 [2024-07-16 01:17:58.329529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.374 [2024-07-16 01:17:58.329554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:46288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.374 [2024-07-16 01:17:58.329578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.374 [2024-07-16 01:17:58.329603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:46296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.374 [2024-07-16 01:17:58.329624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.374 [2024-07-16 01:17:58.329648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:46304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.374 [2024-07-16 01:17:58.329669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.374 [2024-07-16 01:17:58.329693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:46312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.374 [2024-07-16 01:17:58.329714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.374 [2024-07-16 01:17:58.329738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:45672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.374 [2024-07-16 01:17:58.329758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.374 [2024-07-16 01:17:58.329781] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14fbc80 is same with the state(5) to be set 00:24:42.374 [2024-07-16 01:17:58.329806] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:42.374 [2024-07-16 01:17:58.329826] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:42.374 [2024-07-16 01:17:58.329843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45680 len:8 PRP1 0x0 PRP2 0x0 00:24:42.374 [2024-07-16 01:17:58.329862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.374 [2024-07-16 01:17:58.330191] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x14fbc80 was disconnected and freed. reset controller. 
00:24:42.374 [2024-07-16 01:17:58.330307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:42.374 [2024-07-16 01:17:58.330333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.374 [2024-07-16 01:17:58.330371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:42.374 [2024-07-16 01:17:58.330394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.374 [2024-07-16 01:17:58.330418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:42.374 [2024-07-16 01:17:58.330440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.374 [2024-07-16 01:17:58.330462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:42.374 [2024-07-16 01:17:58.330484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.374 [2024-07-16 01:17:58.330504] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:42.374 [2024-07-16 01:17:58.334529] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.374 [2024-07-16 01:17:58.334573] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:42.374 [2024-07-16 01:17:58.335377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.374 [2024-07-16 01:17:58.335416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:42.374 [2024-07-16 01:17:58.335443] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:42.374 [2024-07-16 01:17:58.335764] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:42.374 [2024-07-16 01:17:58.336063] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.374 [2024-07-16 01:17:58.336093] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.374 [2024-07-16 01:17:58.336121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.374 [2024-07-16 01:17:58.339696] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.374 [2024-07-16 01:17:58.348052] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.374 [2024-07-16 01:17:58.348431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.374 [2024-07-16 01:17:58.348462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:42.374 [2024-07-16 01:17:58.348488] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:42.374 [2024-07-16 01:17:58.348751] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:42.374 [2024-07-16 01:17:58.348988] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.374 [2024-07-16 01:17:58.349014] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.374 [2024-07-16 01:17:58.349037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.374 [2024-07-16 01:17:58.352072] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.374 [2024-07-16 01:17:58.361476] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.374 [2024-07-16 01:17:58.361849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.374 [2024-07-16 01:17:58.361880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:42.374 [2024-07-16 01:17:58.361905] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:42.374 [2024-07-16 01:17:58.362197] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:42.374 [2024-07-16 01:17:58.362456] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.374 [2024-07-16 01:17:58.362479] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.374 [2024-07-16 01:17:58.362500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.632 [2024-07-16 01:17:58.365812] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.632 [2024-07-16 01:17:58.374595] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.632 [2024-07-16 01:17:58.374944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.632 [2024-07-16 01:17:58.374997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:42.632 [2024-07-16 01:17:58.375025] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:42.632 [2024-07-16 01:17:58.375308] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:42.632 [2024-07-16 01:17:58.375519] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.632 [2024-07-16 01:17:58.375540] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.632 [2024-07-16 01:17:58.375560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.632 [2024-07-16 01:17:58.378500] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.632 [2024-07-16 01:17:58.387765] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.632 [2024-07-16 01:17:58.388170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.632 [2024-07-16 01:17:58.388202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:42.632 [2024-07-16 01:17:58.388230] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:42.632 [2024-07-16 01:17:58.388515] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:42.632 [2024-07-16 01:17:58.388722] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.632 [2024-07-16 01:17:58.388743] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.632 [2024-07-16 01:17:58.388762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.632 [2024-07-16 01:17:58.391736] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.632 [2024-07-16 01:17:58.401024] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.632 [2024-07-16 01:17:58.401458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.632 [2024-07-16 01:17:58.401489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:42.632 [2024-07-16 01:17:58.401516] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:42.632 [2024-07-16 01:17:58.401796] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:42.632 [2024-07-16 01:17:58.402031] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.632 [2024-07-16 01:17:58.402053] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.632 [2024-07-16 01:17:58.402073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.632 [2024-07-16 01:17:58.405026] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.632 [2024-07-16 01:17:58.414221] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.632 [2024-07-16 01:17:58.414680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.632 [2024-07-16 01:17:58.414712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:42.632 [2024-07-16 01:17:58.414739] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:42.632 [2024-07-16 01:17:58.415031] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:42.632 [2024-07-16 01:17:58.415271] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.632 [2024-07-16 01:17:58.415307] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.632 [2024-07-16 01:17:58.415327] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.632 [2024-07-16 01:17:58.418364] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.632 [2024-07-16 01:17:58.427227] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.632 [2024-07-16 01:17:58.427663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.632 [2024-07-16 01:17:58.427715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:42.632 [2024-07-16 01:17:58.427739] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:42.632 [2024-07-16 01:17:58.428013] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:42.632 [2024-07-16 01:17:58.428227] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.632 [2024-07-16 01:17:58.428263] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.632 [2024-07-16 01:17:58.428283] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.632 [2024-07-16 01:17:58.431081] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.632 [2024-07-16 01:17:58.440401] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.632 [2024-07-16 01:17:58.440842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.632 [2024-07-16 01:17:58.440873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:42.632 [2024-07-16 01:17:58.440898] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:42.632 [2024-07-16 01:17:58.441206] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:42.632 [2024-07-16 01:17:58.441447] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.632 [2024-07-16 01:17:58.441468] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.632 [2024-07-16 01:17:58.441487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.632 [2024-07-16 01:17:58.444519] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.632 [2024-07-16 01:17:58.453498] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.632 [2024-07-16 01:17:58.453934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.632 [2024-07-16 01:17:58.453989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:42.632 [2024-07-16 01:17:58.454018] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:42.633 [2024-07-16 01:17:58.454320] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:42.633 [2024-07-16 01:17:58.454527] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.633 [2024-07-16 01:17:58.454549] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.633 [2024-07-16 01:17:58.454568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.633 [2024-07-16 01:17:58.457537] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.633 [2024-07-16 01:17:58.466755] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.633 [2024-07-16 01:17:58.467241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.633 [2024-07-16 01:17:58.467303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:42.633 [2024-07-16 01:17:58.467331] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:42.633 [2024-07-16 01:17:58.467623] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:42.633 [2024-07-16 01:17:58.467836] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.633 [2024-07-16 01:17:58.467858] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.633 [2024-07-16 01:17:58.467878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.633 [2024-07-16 01:17:58.470847] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.633 [2024-07-16 01:17:58.479902] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.633 [2024-07-16 01:17:58.480307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.633 [2024-07-16 01:17:58.480338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:42.633 [2024-07-16 01:17:58.480363] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:42.633 [2024-07-16 01:17:58.480627] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:42.633 [2024-07-16 01:17:58.480833] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.633 [2024-07-16 01:17:58.480855] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.633 [2024-07-16 01:17:58.480874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.633 [2024-07-16 01:17:58.483749] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.633 [2024-07-16 01:17:58.493076] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.633 [2024-07-16 01:17:58.493530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.633 [2024-07-16 01:17:58.493580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:42.633 [2024-07-16 01:17:58.493606] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:42.633 [2024-07-16 01:17:58.493883] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:42.633 [2024-07-16 01:17:58.494122] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.633 [2024-07-16 01:17:58.494144] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.633 [2024-07-16 01:17:58.494165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.633 [2024-07-16 01:17:58.497098] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.633 [2024-07-16 01:17:58.506354] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.633 [2024-07-16 01:17:58.506732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.633 [2024-07-16 01:17:58.506764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:42.633 [2024-07-16 01:17:58.506791] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:42.633 [2024-07-16 01:17:58.507082] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:42.633 [2024-07-16 01:17:58.507314] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.633 [2024-07-16 01:17:58.507335] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.633 [2024-07-16 01:17:58.507355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.633 [2024-07-16 01:17:58.510320] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.633 [2024-07-16 01:17:58.519507] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.633 [2024-07-16 01:17:58.519947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.633 [2024-07-16 01:17:58.520000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:42.633 [2024-07-16 01:17:58.520027] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:42.633 [2024-07-16 01:17:58.520310] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:42.633 [2024-07-16 01:17:58.520517] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.633 [2024-07-16 01:17:58.520539] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.633 [2024-07-16 01:17:58.520558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.633 [2024-07-16 01:17:58.523519] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.633 [2024-07-16 01:17:58.532572] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.633 [2024-07-16 01:17:58.533040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.633 [2024-07-16 01:17:58.533072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:42.633 [2024-07-16 01:17:58.533099] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:42.633 [2024-07-16 01:17:58.533396] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:42.633 [2024-07-16 01:17:58.533602] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.633 [2024-07-16 01:17:58.533623] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.633 [2024-07-16 01:17:58.533643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.633 [2024-07-16 01:17:58.536598] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.633 [2024-07-16 01:17:58.545803] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.633 [2024-07-16 01:17:58.546187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.633 [2024-07-16 01:17:58.546218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:42.633 [2024-07-16 01:17:58.546244] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:42.633 [2024-07-16 01:17:58.546524] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:42.633 [2024-07-16 01:17:58.546730] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.633 [2024-07-16 01:17:58.546751] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.633 [2024-07-16 01:17:58.546770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.633 [2024-07-16 01:17:58.549608] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.633 [2024-07-16 01:17:58.558781] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.633 [2024-07-16 01:17:58.559225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.633 [2024-07-16 01:17:58.559257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:42.633 [2024-07-16 01:17:58.559283] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:42.633 [2024-07-16 01:17:58.559562] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:42.633 [2024-07-16 01:17:58.559769] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.633 [2024-07-16 01:17:58.559791] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.633 [2024-07-16 01:17:58.559809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.633 [2024-07-16 01:17:58.562768] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.633 [2024-07-16 01:17:58.571806] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.633 [2024-07-16 01:17:58.572247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.633 [2024-07-16 01:17:58.572280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:42.633 [2024-07-16 01:17:58.572306] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:42.633 [2024-07-16 01:17:58.572588] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:42.633 [2024-07-16 01:17:58.572796] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.633 [2024-07-16 01:17:58.572817] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.633 [2024-07-16 01:17:58.572836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.633 [2024-07-16 01:17:58.575778] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.633 [2024-07-16 01:17:58.585274] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.633 [2024-07-16 01:17:58.585744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.633 [2024-07-16 01:17:58.585777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:42.633 [2024-07-16 01:17:58.585804] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:42.633 [2024-07-16 01:17:58.586130] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:42.633 [2024-07-16 01:17:58.586401] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.633 [2024-07-16 01:17:58.586424] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.633 [2024-07-16 01:17:58.586445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.633 [2024-07-16 01:17:58.589568] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.633 [2024-07-16 01:17:58.598582] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.633 [2024-07-16 01:17:58.599089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.633 [2024-07-16 01:17:58.599126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:42.633 [2024-07-16 01:17:58.599161] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:42.634 [2024-07-16 01:17:58.599444] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:42.634 [2024-07-16 01:17:58.599669] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.634 [2024-07-16 01:17:58.599690] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.634 [2024-07-16 01:17:58.599710] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.634 [2024-07-16 01:17:58.602818] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.634 [2024-07-16 01:17:58.611666] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.634 [2024-07-16 01:17:58.612107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.634 [2024-07-16 01:17:58.612139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:42.634 [2024-07-16 01:17:58.612166] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:42.634 [2024-07-16 01:17:58.612450] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:42.634 [2024-07-16 01:17:58.612656] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.634 [2024-07-16 01:17:58.612677] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.634 [2024-07-16 01:17:58.612696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.634 [2024-07-16 01:17:58.615538] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.891 [2024-07-16 01:17:58.625392] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.891 [2024-07-16 01:17:58.625834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.891 [2024-07-16 01:17:58.625866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:42.891 [2024-07-16 01:17:58.625893] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:42.891 [2024-07-16 01:17:58.626169] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:42.891 [2024-07-16 01:17:58.626416] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.891 [2024-07-16 01:17:58.626438] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.891 [2024-07-16 01:17:58.626458] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.891 [2024-07-16 01:17:58.629507] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
[... the same nine-message reconnect-failure sequence (resetting controller -> connect() failed, errno = 111 -> sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 -> Resetting controller failed.) repeats 50 more times, from 01:17:58.638372 through 01:17:59.287133 (wall clock 00:24:42.891 to 00:24:43.482) ...]
00:24:43.482 [2024-07-16 01:17:59.296073] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.482 [2024-07-16 01:17:59.296468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.482 [2024-07-16 01:17:59.296499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:43.482 [2024-07-16 01:17:59.296525] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:43.482 [2024-07-16 01:17:59.296808] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:43.482 [2024-07-16 01:17:59.297038] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.482 [2024-07-16 01:17:59.297060] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.482 [2024-07-16 01:17:59.297080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.482 [2024-07-16 01:17:59.299992] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.482 [2024-07-16 01:17:59.309189] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.482 [2024-07-16 01:17:59.309605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.482 [2024-07-16 01:17:59.309640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:43.482 [2024-07-16 01:17:59.309665] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:43.482 [2024-07-16 01:17:59.309928] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:43.482 [2024-07-16 01:17:59.310182] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.482 [2024-07-16 01:17:59.310204] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.482 [2024-07-16 01:17:59.310224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.482 [2024-07-16 01:17:59.313130] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.482 [2024-07-16 01:17:59.322295] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.482 [2024-07-16 01:17:59.322670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.482 [2024-07-16 01:17:59.322702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:43.482 [2024-07-16 01:17:59.322728] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:43.482 [2024-07-16 01:17:59.323004] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:43.482 [2024-07-16 01:17:59.323216] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.482 [2024-07-16 01:17:59.323238] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.482 [2024-07-16 01:17:59.323271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.482 [2024-07-16 01:17:59.326067] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.482 [2024-07-16 01:17:59.335457] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.482 [2024-07-16 01:17:59.335914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.482 [2024-07-16 01:17:59.335982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:43.482 [2024-07-16 01:17:59.336023] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:43.482 [2024-07-16 01:17:59.336307] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:43.482 [2024-07-16 01:17:59.336562] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.482 [2024-07-16 01:17:59.336585] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.482 [2024-07-16 01:17:59.336606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.482 [2024-07-16 01:17:59.339965] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.482 [2024-07-16 01:17:59.348901] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.482 [2024-07-16 01:17:59.349499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.482 [2024-07-16 01:17:59.349531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:43.482 [2024-07-16 01:17:59.349559] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:43.482 [2024-07-16 01:17:59.349840] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:43.482 [2024-07-16 01:17:59.350108] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.482 [2024-07-16 01:17:59.350134] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.482 [2024-07-16 01:17:59.350157] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.482 [2024-07-16 01:17:59.353286] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.482 [2024-07-16 01:17:59.362623] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.483 [2024-07-16 01:17:59.363042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.483 [2024-07-16 01:17:59.363075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:43.483 [2024-07-16 01:17:59.363102] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:43.483 [2024-07-16 01:17:59.363388] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:43.483 [2024-07-16 01:17:59.363624] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.483 [2024-07-16 01:17:59.363647] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.483 [2024-07-16 01:17:59.363668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.483 [2024-07-16 01:17:59.367065] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.483 [2024-07-16 01:17:59.376187] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.483 [2024-07-16 01:17:59.376612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.483 [2024-07-16 01:17:59.376645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:43.483 [2024-07-16 01:17:59.376672] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:43.483 [2024-07-16 01:17:59.376975] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:43.483 [2024-07-16 01:17:59.377215] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.483 [2024-07-16 01:17:59.377240] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.483 [2024-07-16 01:17:59.377279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.483 [2024-07-16 01:17:59.380590] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.483 [2024-07-16 01:17:59.389722] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.483 [2024-07-16 01:17:59.390086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.483 [2024-07-16 01:17:59.390118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:43.483 [2024-07-16 01:17:59.390145] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:43.483 [2024-07-16 01:17:59.390427] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:43.483 [2024-07-16 01:17:59.390671] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.483 [2024-07-16 01:17:59.390693] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.483 [2024-07-16 01:17:59.390713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.483 [2024-07-16 01:17:59.394037] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.483 [2024-07-16 01:17:59.403131] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.483 [2024-07-16 01:17:59.403624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.483 [2024-07-16 01:17:59.403657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:43.483 [2024-07-16 01:17:59.403684] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:43.483 [2024-07-16 01:17:59.403983] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:43.483 [2024-07-16 01:17:59.404242] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.483 [2024-07-16 01:17:59.404267] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.483 [2024-07-16 01:17:59.404290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.483 [2024-07-16 01:17:59.407609] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.483 [2024-07-16 01:17:59.416805] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.483 [2024-07-16 01:17:59.417199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.483 [2024-07-16 01:17:59.417256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:43.483 [2024-07-16 01:17:59.417302] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:43.483 [2024-07-16 01:17:59.417578] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:43.483 [2024-07-16 01:17:59.417797] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.483 [2024-07-16 01:17:59.417819] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.483 [2024-07-16 01:17:59.417839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.483 [2024-07-16 01:17:59.421149] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.483 [2024-07-16 01:17:59.430401] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.483 [2024-07-16 01:17:59.430837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.483 [2024-07-16 01:17:59.430869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:43.483 [2024-07-16 01:17:59.430895] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:43.483 [2024-07-16 01:17:59.431160] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:43.483 [2024-07-16 01:17:59.431418] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.483 [2024-07-16 01:17:59.431441] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.483 [2024-07-16 01:17:59.431461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.483 [2024-07-16 01:17:59.434577] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.483 [2024-07-16 01:17:59.443930] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.483 [2024-07-16 01:17:59.444416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.483 [2024-07-16 01:17:59.444448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:43.483 [2024-07-16 01:17:59.444483] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:43.483 [2024-07-16 01:17:59.444764] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:43.483 [2024-07-16 01:17:59.445020] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.483 [2024-07-16 01:17:59.445045] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.483 [2024-07-16 01:17:59.445068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.483 [2024-07-16 01:17:59.448455] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.741 [2024-07-16 01:17:59.457474] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.741 [2024-07-16 01:17:59.457927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.741 [2024-07-16 01:17:59.457966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:43.741 [2024-07-16 01:17:59.457995] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:43.741 [2024-07-16 01:17:59.458265] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:43.741 [2024-07-16 01:17:59.458488] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.741 [2024-07-16 01:17:59.458509] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.741 [2024-07-16 01:17:59.458528] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.741 [2024-07-16 01:17:59.461568] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.741 [2024-07-16 01:17:59.470696] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.741 [2024-07-16 01:17:59.471127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.741 [2024-07-16 01:17:59.471159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:43.741 [2024-07-16 01:17:59.471186] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:43.741 [2024-07-16 01:17:59.471468] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:43.741 [2024-07-16 01:17:59.471673] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.741 [2024-07-16 01:17:59.471694] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.741 [2024-07-16 01:17:59.471714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.741 [2024-07-16 01:17:59.474678] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.741 [2024-07-16 01:17:59.484079] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.741 [2024-07-16 01:17:59.484566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.741 [2024-07-16 01:17:59.484598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:43.741 [2024-07-16 01:17:59.484623] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:43.741 [2024-07-16 01:17:59.484903] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:43.741 [2024-07-16 01:17:59.485147] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.741 [2024-07-16 01:17:59.485175] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.741 [2024-07-16 01:17:59.485197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.741 [2024-07-16 01:17:59.488208] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.741 [2024-07-16 01:17:59.497324] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.741 [2024-07-16 01:17:59.497702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.741 [2024-07-16 01:17:59.497733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:43.741 [2024-07-16 01:17:59.497759] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:43.742 [2024-07-16 01:17:59.498035] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:43.742 [2024-07-16 01:17:59.498262] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.742 [2024-07-16 01:17:59.498283] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.742 [2024-07-16 01:17:59.498302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.742 [2024-07-16 01:17:59.501242] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.742 [2024-07-16 01:17:59.510467] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.742 [2024-07-16 01:17:59.510841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.742 [2024-07-16 01:17:59.510873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:43.742 [2024-07-16 01:17:59.510898] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:43.742 [2024-07-16 01:17:59.511197] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:43.742 [2024-07-16 01:17:59.511441] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.742 [2024-07-16 01:17:59.511462] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.742 [2024-07-16 01:17:59.511481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.742 [2024-07-16 01:17:59.514421] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.742 [2024-07-16 01:17:59.523632] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.742 [2024-07-16 01:17:59.524017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.742 [2024-07-16 01:17:59.524064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:43.742 [2024-07-16 01:17:59.524092] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:43.742 [2024-07-16 01:17:59.524380] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:43.742 [2024-07-16 01:17:59.524587] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.742 [2024-07-16 01:17:59.524608] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.742 [2024-07-16 01:17:59.524627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.742 [2024-07-16 01:17:59.527581] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.742 [2024-07-16 01:17:59.536818] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.742 [2024-07-16 01:17:59.537222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.742 [2024-07-16 01:17:59.537254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:43.742 [2024-07-16 01:17:59.537296] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:43.742 [2024-07-16 01:17:59.537574] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:43.742 [2024-07-16 01:17:59.537782] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.742 [2024-07-16 01:17:59.537804] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.742 [2024-07-16 01:17:59.537823] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.742 [2024-07-16 01:17:59.540776] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.742 [2024-07-16 01:17:59.550025] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.742 [2024-07-16 01:17:59.550443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.742 [2024-07-16 01:17:59.550498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:43.742 [2024-07-16 01:17:59.550525] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:43.742 [2024-07-16 01:17:59.550801] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:43.742 [2024-07-16 01:17:59.551036] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.742 [2024-07-16 01:17:59.551059] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.742 [2024-07-16 01:17:59.551079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.742 [2024-07-16 01:17:59.553946] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.742 [2024-07-16 01:17:59.563212] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.742 [2024-07-16 01:17:59.563619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.742 [2024-07-16 01:17:59.563671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:43.742 [2024-07-16 01:17:59.563697] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:43.742 [2024-07-16 01:17:59.563999] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:43.742 [2024-07-16 01:17:59.564212] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.742 [2024-07-16 01:17:59.564234] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.742 [2024-07-16 01:17:59.564254] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.742 [2024-07-16 01:17:59.567179] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.742 [2024-07-16 01:17:59.576404] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.742 [2024-07-16 01:17:59.576810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.742 [2024-07-16 01:17:59.576864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:43.742 [2024-07-16 01:17:59.576890] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:43.742 [2024-07-16 01:17:59.577171] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:43.742 [2024-07-16 01:17:59.577410] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.742 [2024-07-16 01:17:59.577432] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.742 [2024-07-16 01:17:59.577451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.742 [2024-07-16 01:17:59.580389] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.742 [2024-07-16 01:17:59.589613] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.742 [2024-07-16 01:17:59.590070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.742 [2024-07-16 01:17:59.590103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:43.742 [2024-07-16 01:17:59.590131] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:43.742 [2024-07-16 01:17:59.590421] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:43.742 [2024-07-16 01:17:59.590649] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.742 [2024-07-16 01:17:59.590671] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.742 [2024-07-16 01:17:59.590691] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.742 [2024-07-16 01:17:59.594067] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.742 [2024-07-16 01:17:59.602699] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.742 [2024-07-16 01:17:59.603119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.742 [2024-07-16 01:17:59.603152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:43.742 [2024-07-16 01:17:59.603178] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:43.742 [2024-07-16 01:17:59.603450] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:43.742 [2024-07-16 01:17:59.603655] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.742 [2024-07-16 01:17:59.603676] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.742 [2024-07-16 01:17:59.603695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.742 [2024-07-16 01:17:59.606653] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.742 [2024-07-16 01:17:59.615843] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.742 [2024-07-16 01:17:59.616373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.742 [2024-07-16 01:17:59.616404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:43.742 [2024-07-16 01:17:59.616431] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:43.742 [2024-07-16 01:17:59.616722] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:43.742 [2024-07-16 01:17:59.616928] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.742 [2024-07-16 01:17:59.616949] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.742 [2024-07-16 01:17:59.616996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.742 [2024-07-16 01:17:59.619926] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.742 [2024-07-16 01:17:59.629008] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.742 [2024-07-16 01:17:59.629511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.743 [2024-07-16 01:17:59.629569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:43.743 [2024-07-16 01:17:59.629595] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:43.743 [2024-07-16 01:17:59.629863] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:43.743 [2024-07-16 01:17:59.630111] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.743 [2024-07-16 01:17:59.630134] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.743 [2024-07-16 01:17:59.630156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.743 [2024-07-16 01:17:59.633090] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.743 [2024-07-16 01:17:59.642181] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.743 [2024-07-16 01:17:59.642575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.743 [2024-07-16 01:17:59.642606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:43.743 [2024-07-16 01:17:59.642632] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:43.743 [2024-07-16 01:17:59.642912] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:43.743 [2024-07-16 01:17:59.643145] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.743 [2024-07-16 01:17:59.643167] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.743 [2024-07-16 01:17:59.643187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.743 [2024-07-16 01:17:59.646141] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.743 [2024-07-16 01:17:59.655359] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.743 [2024-07-16 01:17:59.655721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.743 [2024-07-16 01:17:59.655777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:43.743 [2024-07-16 01:17:59.655802] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:43.743 [2024-07-16 01:17:59.656101] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:43.743 [2024-07-16 01:17:59.656352] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.743 [2024-07-16 01:17:59.656373] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.743 [2024-07-16 01:17:59.656392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.743 [2024-07-16 01:17:59.659352] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.743 [2024-07-16 01:17:59.668348] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.743 [2024-07-16 01:17:59.668754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.743 [2024-07-16 01:17:59.668784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:43.743 [2024-07-16 01:17:59.668810] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:43.743 [2024-07-16 01:17:59.669086] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:43.743 [2024-07-16 01:17:59.669319] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.743 [2024-07-16 01:17:59.669353] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.743 [2024-07-16 01:17:59.669372] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.743 [2024-07-16 01:17:59.672289] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.743 [2024-07-16 01:17:59.681987] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.743 [2024-07-16 01:17:59.682452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.743 [2024-07-16 01:17:59.682484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:43.743 [2024-07-16 01:17:59.682511] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:43.743 [2024-07-16 01:17:59.682800] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:43.743 [2024-07-16 01:17:59.683055] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.743 [2024-07-16 01:17:59.683079] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.743 [2024-07-16 01:17:59.683099] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.743 [2024-07-16 01:17:59.686107] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.743 [2024-07-16 01:17:59.695139] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.743 [2024-07-16 01:17:59.695521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.743 [2024-07-16 01:17:59.695552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:43.743 [2024-07-16 01:17:59.695577] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:43.743 [2024-07-16 01:17:59.695842] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:43.743 [2024-07-16 01:17:59.696075] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.743 [2024-07-16 01:17:59.696097] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.743 [2024-07-16 01:17:59.696116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.743 [2024-07-16 01:17:59.699031] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.743 [2024-07-16 01:17:59.708306] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.743 [2024-07-16 01:17:59.708683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.743 [2024-07-16 01:17:59.708714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:43.743 [2024-07-16 01:17:59.708740] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:43.743 [2024-07-16 01:17:59.709042] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:43.743 [2024-07-16 01:17:59.709275] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.743 [2024-07-16 01:17:59.709296] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.743 [2024-07-16 01:17:59.709329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.743 [2024-07-16 01:17:59.712284] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.743 [2024-07-16 01:17:59.721504] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.743 [2024-07-16 01:17:59.721960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.743 [2024-07-16 01:17:59.722007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:43.743 [2024-07-16 01:17:59.722034] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:43.743 [2024-07-16 01:17:59.722342] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:43.743 [2024-07-16 01:17:59.722549] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.743 [2024-07-16 01:17:59.722570] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.743 [2024-07-16 01:17:59.722589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.743 [2024-07-16 01:17:59.725555] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.001 [2024-07-16 01:17:59.735211] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.001 [2024-07-16 01:17:59.735639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.001 [2024-07-16 01:17:59.735670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:44.001 [2024-07-16 01:17:59.735697] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:44.001 [2024-07-16 01:17:59.736003] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:44.001 [2024-07-16 01:17:59.736228] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.001 [2024-07-16 01:17:59.736252] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.001 [2024-07-16 01:17:59.736273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.001 [2024-07-16 01:17:59.739330] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.001 [2024-07-16 01:17:59.748302] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.001 [2024-07-16 01:17:59.748677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.001 [2024-07-16 01:17:59.748707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:44.001 [2024-07-16 01:17:59.748733] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:44.001 [2024-07-16 01:17:59.749008] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:44.001 [2024-07-16 01:17:59.749220] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.001 [2024-07-16 01:17:59.749242] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.001 [2024-07-16 01:17:59.749281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.001 [2024-07-16 01:17:59.752077] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.001 [2024-07-16 01:17:59.761396] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.001 [2024-07-16 01:17:59.761776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.001 [2024-07-16 01:17:59.761806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:44.001 [2024-07-16 01:17:59.761832] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:44.001 [2024-07-16 01:17:59.762124] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:44.001 [2024-07-16 01:17:59.762368] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.001 [2024-07-16 01:17:59.762389] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.001 [2024-07-16 01:17:59.762407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.001 [2024-07-16 01:17:59.765322] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.001 [2024-07-16 01:17:59.774432] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.001 [2024-07-16 01:17:59.774923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.001 [2024-07-16 01:17:59.774981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:44.001 [2024-07-16 01:17:59.775009] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:44.001 [2024-07-16 01:17:59.775274] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:44.001 [2024-07-16 01:17:59.775481] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.002 [2024-07-16 01:17:59.775502] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.002 [2024-07-16 01:17:59.775521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.002 [2024-07-16 01:17:59.778360] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.002 [2024-07-16 01:17:59.787526] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.002 [2024-07-16 01:17:59.787899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.002 [2024-07-16 01:17:59.787998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:44.002 [2024-07-16 01:17:59.788024] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:44.002 [2024-07-16 01:17:59.788291] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:44.002 [2024-07-16 01:17:59.788497] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.002 [2024-07-16 01:17:59.788518] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.002 [2024-07-16 01:17:59.788536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.002 [2024-07-16 01:17:59.791373] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.002 [2024-07-16 01:17:59.800619] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.002 [2024-07-16 01:17:59.801038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.002 [2024-07-16 01:17:59.801073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:44.002 [2024-07-16 01:17:59.801098] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:44.002 [2024-07-16 01:17:59.801373] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:44.002 [2024-07-16 01:17:59.801579] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.002 [2024-07-16 01:17:59.801600] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.002 [2024-07-16 01:17:59.801619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.002 [2024-07-16 01:17:59.804558] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.002 [2024-07-16 01:17:59.813669] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.002 [2024-07-16 01:17:59.814055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.002 [2024-07-16 01:17:59.814084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:44.002 [2024-07-16 01:17:59.814109] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:44.002 [2024-07-16 01:17:59.814353] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:44.002 [2024-07-16 01:17:59.814574] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.002 [2024-07-16 01:17:59.814595] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.002 [2024-07-16 01:17:59.814614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.002 [2024-07-16 01:17:59.817539] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.002 [2024-07-16 01:17:59.826849] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.002 [2024-07-16 01:17:59.827230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.002 [2024-07-16 01:17:59.827260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:44.002 [2024-07-16 01:17:59.827285] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:44.002 [2024-07-16 01:17:59.827561] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:44.002 [2024-07-16 01:17:59.827768] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.002 [2024-07-16 01:17:59.827789] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.002 [2024-07-16 01:17:59.827808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.002 [2024-07-16 01:17:59.830729] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.002 [2024-07-16 01:17:59.840172] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.002 [2024-07-16 01:17:59.840659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.002 [2024-07-16 01:17:59.840690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:44.002 [2024-07-16 01:17:59.840716] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:44.002 [2024-07-16 01:17:59.841013] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:44.002 [2024-07-16 01:17:59.841261] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.002 [2024-07-16 01:17:59.841298] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.002 [2024-07-16 01:17:59.841318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.002 [2024-07-16 01:17:59.844635] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.002 [2024-07-16 01:17:59.853356] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.002 [2024-07-16 01:17:59.853732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.002 [2024-07-16 01:17:59.853762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:44.002 [2024-07-16 01:17:59.853788] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:44.002 [2024-07-16 01:17:59.854078] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:44.002 [2024-07-16 01:17:59.854319] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.002 [2024-07-16 01:17:59.854356] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.002 [2024-07-16 01:17:59.854375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.002 [2024-07-16 01:17:59.857324] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.002 [2024-07-16 01:17:59.866509] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.002 [2024-07-16 01:17:59.866884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.002 [2024-07-16 01:17:59.866915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:44.002 [2024-07-16 01:17:59.866941] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:44.002 [2024-07-16 01:17:59.867222] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:44.002 [2024-07-16 01:17:59.867459] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.002 [2024-07-16 01:17:59.867481] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.002 [2024-07-16 01:17:59.867500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.002 [2024-07-16 01:17:59.870413] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.002 [2024-07-16 01:17:59.879558] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.002 [2024-07-16 01:17:59.879993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.002 [2024-07-16 01:17:59.880024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:44.002 [2024-07-16 01:17:59.880051] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:44.002 [2024-07-16 01:17:59.880331] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:44.002 [2024-07-16 01:17:59.880537] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.002 [2024-07-16 01:17:59.880558] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.002 [2024-07-16 01:17:59.880577] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.002 [2024-07-16 01:17:59.883584] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.002 [2024-07-16 01:17:59.892692] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.002 [2024-07-16 01:17:59.893032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.002 [2024-07-16 01:17:59.893062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:44.002 [2024-07-16 01:17:59.893088] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:44.002 [2024-07-16 01:17:59.893353] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:44.002 [2024-07-16 01:17:59.893559] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.002 [2024-07-16 01:17:59.893580] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.002 [2024-07-16 01:17:59.893599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.002 [2024-07-16 01:17:59.896438] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.002 [2024-07-16 01:17:59.905757] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.002 [2024-07-16 01:17:59.906198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.002 [2024-07-16 01:17:59.906229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:44.002 [2024-07-16 01:17:59.906254] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:44.002 [2024-07-16 01:17:59.906528] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:44.002 [2024-07-16 01:17:59.906734] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.002 [2024-07-16 01:17:59.906755] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.002 [2024-07-16 01:17:59.906775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.002 [2024-07-16 01:17:59.909650] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.002 [2024-07-16 01:17:59.918809] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.002 [2024-07-16 01:17:59.919218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.002 [2024-07-16 01:17:59.919248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:44.002 [2024-07-16 01:17:59.919272] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:44.002 [2024-07-16 01:17:59.919536] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:44.003 [2024-07-16 01:17:59.919742] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.003 [2024-07-16 01:17:59.919763] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.003 [2024-07-16 01:17:59.919782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.003 [2024-07-16 01:17:59.922725] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.003 [2024-07-16 01:17:59.931839] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.003 [2024-07-16 01:17:59.932232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.003 [2024-07-16 01:17:59.932261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:44.003 [2024-07-16 01:17:59.932291] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:44.003 [2024-07-16 01:17:59.932547] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:44.003 [2024-07-16 01:17:59.932753] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.003 [2024-07-16 01:17:59.932774] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.003 [2024-07-16 01:17:59.932793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.003 [2024-07-16 01:17:59.935736] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.003 [2024-07-16 01:17:59.944848] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.003 [2024-07-16 01:17:59.945236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.003 [2024-07-16 01:17:59.945268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:44.003 [2024-07-16 01:17:59.945295] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:44.003 [2024-07-16 01:17:59.945575] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:44.003 [2024-07-16 01:17:59.945781] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.003 [2024-07-16 01:17:59.945802] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.003 [2024-07-16 01:17:59.945821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.003 [2024-07-16 01:17:59.948760] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.003 [2024-07-16 01:17:59.957871] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.003 [2024-07-16 01:17:59.958257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.003 [2024-07-16 01:17:59.958289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:44.003 [2024-07-16 01:17:59.958315] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:44.003 [2024-07-16 01:17:59.958600] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:44.003 [2024-07-16 01:17:59.958807] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.003 [2024-07-16 01:17:59.958828] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.003 [2024-07-16 01:17:59.958847] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.003 [2024-07-16 01:17:59.961798] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.003 [2024-07-16 01:17:59.970912] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.003 [2024-07-16 01:17:59.971264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.003 [2024-07-16 01:17:59.971295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:44.003 [2024-07-16 01:17:59.971320] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:44.003 [2024-07-16 01:17:59.971586] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:44.003 [2024-07-16 01:17:59.971792] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.003 [2024-07-16 01:17:59.971818] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.003 [2024-07-16 01:17:59.971840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.003 [2024-07-16 01:17:59.974800] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.003 [2024-07-16 01:17:59.984294] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.003 [2024-07-16 01:17:59.984746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.003 [2024-07-16 01:17:59.984777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:44.003 [2024-07-16 01:17:59.984804] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:44.003 [2024-07-16 01:17:59.985091] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:44.003 [2024-07-16 01:17:59.985353] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.003 [2024-07-16 01:17:59.985374] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.003 [2024-07-16 01:17:59.985393] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.003 [2024-07-16 01:17:59.988453] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.262 [2024-07-16 01:17:59.997747] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.262 [2024-07-16 01:17:59.998127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.262 [2024-07-16 01:17:59.998159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:44.262 [2024-07-16 01:17:59.998187] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:44.262 [2024-07-16 01:17:59.998478] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:44.262 [2024-07-16 01:17:59.998728] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.262 [2024-07-16 01:17:59.998751] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.262 [2024-07-16 01:17:59.998772] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.262 [2024-07-16 01:18:00.002169] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
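
Immediately after each refused connect, the log shows "Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor": the qpair's socket has already been torn down by the failed connection attempt, so the flush hits a closed descriptor and gets EBADF. A standalone sketch (not SPDK's tqpair code) shows the same errno from any I/O on a closed socket:

/* Standalone illustration, not SPDK's tqpair code: any I/O on a closed
 * file descriptor fails with EBADF, errno 9 on Linux, which is the
 * "(9): Bad file descriptor" nvme_tcp_qpair_process_completions reports
 * when it tries to flush a qpair whose socket was already torn down. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }
    close(fd);                        /* socket torn down first... */

    char byte = 0;
    if (send(fd, &byte, 1, 0) < 0) {  /* ...then a flush is attempted */
        printf("send() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    return 0;
}

This prints "send() failed, errno = 9 (Bad file descriptor)"; on Linux EBADF is errno 9, matching the "(9)" in the log. The flush failure is therefore a consequence of the refused connect one record earlier, not an independent fault.
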
00:24:44.262 [2024-07-16 01:18:00.011966] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.262 [2024-07-16 01:18:00.012492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.262 [2024-07-16 01:18:00.012528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:44.262 [2024-07-16 01:18:00.012557] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:44.262 [2024-07-16 01:18:00.012862] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:44.262 [2024-07-16 01:18:00.013126] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.262 [2024-07-16 01:18:00.013152] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.262 [2024-07-16 01:18:00.013175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.262 [2024-07-16 01:18:00.016385] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.262 [2024-07-16 01:18:00.025490] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.262 [2024-07-16 01:18:00.025874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.262 [2024-07-16 01:18:00.025907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:44.262 [2024-07-16 01:18:00.025935] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:44.262 [2024-07-16 01:18:00.026223] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:44.262 [2024-07-16 01:18:00.026476] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.262 [2024-07-16 01:18:00.026499] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.262 [2024-07-16 01:18:00.026521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.262 [2024-07-16 01:18:00.029705] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.262 [2024-07-16 01:18:00.038928] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.262 [2024-07-16 01:18:00.039376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.262 [2024-07-16 01:18:00.039406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:44.262 [2024-07-16 01:18:00.039432] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:44.262 [2024-07-16 01:18:00.039695] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:44.262 [2024-07-16 01:18:00.039923] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.262 [2024-07-16 01:18:00.039946] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.262 [2024-07-16 01:18:00.040003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.262 [2024-07-16 01:18:00.043091] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.262 [2024-07-16 01:18:00.052303] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.262 [2024-07-16 01:18:00.052666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.262 [2024-07-16 01:18:00.052697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:44.262 [2024-07-16 01:18:00.052722] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:44.262 [2024-07-16 01:18:00.052985] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:44.263 [2024-07-16 01:18:00.053235] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.263 [2024-07-16 01:18:00.053258] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.263 [2024-07-16 01:18:00.053279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.263 [2024-07-16 01:18:00.056407] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.263 [2024-07-16 01:18:00.065440] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.263 [2024-07-16 01:18:00.065878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.263 [2024-07-16 01:18:00.065909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:44.263 [2024-07-16 01:18:00.065950] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:44.263 [2024-07-16 01:18:00.066265] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:44.263 [2024-07-16 01:18:00.066486] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.263 [2024-07-16 01:18:00.066507] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.263 [2024-07-16 01:18:00.066527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.263 [2024-07-16 01:18:00.069482] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.263 [2024-07-16 01:18:00.078573] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.263 [2024-07-16 01:18:00.078953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.263 [2024-07-16 01:18:00.078991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:44.263 [2024-07-16 01:18:00.079027] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:44.263 [2024-07-16 01:18:00.079312] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:44.263 [2024-07-16 01:18:00.079518] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.263 [2024-07-16 01:18:00.079540] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.263 [2024-07-16 01:18:00.079559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.263 [2024-07-16 01:18:00.082570] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.263 [2024-07-16 01:18:00.091781] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.263 [2024-07-16 01:18:00.092206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.263 [2024-07-16 01:18:00.092237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:44.263 [2024-07-16 01:18:00.092262] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:44.263 [2024-07-16 01:18:00.092529] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:44.263 [2024-07-16 01:18:00.092736] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.263 [2024-07-16 01:18:00.092757] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.263 [2024-07-16 01:18:00.092776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.263 [2024-07-16 01:18:00.095921] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.263 [2024-07-16 01:18:00.104974] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.263 [2024-07-16 01:18:00.105383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.263 [2024-07-16 01:18:00.105415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:44.263 [2024-07-16 01:18:00.105441] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:44.263 [2024-07-16 01:18:00.105722] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:44.263 [2024-07-16 01:18:00.105927] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.263 [2024-07-16 01:18:00.105948] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.263 [2024-07-16 01:18:00.106009] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.263 [2024-07-16 01:18:00.108953] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.263 [2024-07-16 01:18:00.118245] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.263 [2024-07-16 01:18:00.118698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.263 [2024-07-16 01:18:00.118728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:44.263 [2024-07-16 01:18:00.118754] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:44.263 [2024-07-16 01:18:00.119027] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:44.263 [2024-07-16 01:18:00.119239] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.263 [2024-07-16 01:18:00.119275] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.263 [2024-07-16 01:18:00.119295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.263 [2024-07-16 01:18:00.122232] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.263 [2024-07-16 01:18:00.131471] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.263 [2024-07-16 01:18:00.131879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.263 [2024-07-16 01:18:00.131909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:44.263 [2024-07-16 01:18:00.131934] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:44.263 [2024-07-16 01:18:00.132229] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:44.263 [2024-07-16 01:18:00.132451] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.263 [2024-07-16 01:18:00.132473] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.263 [2024-07-16 01:18:00.132492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.263 [2024-07-16 01:18:00.135423] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.263 [2024-07-16 01:18:00.144645] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.263 [2024-07-16 01:18:00.145056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.263 [2024-07-16 01:18:00.145105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:44.263 [2024-07-16 01:18:00.145132] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:44.263 [2024-07-16 01:18:00.145403] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:44.263 [2024-07-16 01:18:00.145610] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.263 [2024-07-16 01:18:00.145631] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.263 [2024-07-16 01:18:00.145650] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.263 [2024-07-16 01:18:00.148565] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.263 [2024-07-16 01:18:00.157725] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.263 [2024-07-16 01:18:00.158251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.263 [2024-07-16 01:18:00.158307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:44.263 [2024-07-16 01:18:00.158333] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:44.263 [2024-07-16 01:18:00.158619] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:44.263 [2024-07-16 01:18:00.158825] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.263 [2024-07-16 01:18:00.158846] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.263 [2024-07-16 01:18:00.158865] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.263 [2024-07-16 01:18:00.161926] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.263 [2024-07-16 01:18:00.170940] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.263 [2024-07-16 01:18:00.171323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.263 [2024-07-16 01:18:00.171353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:44.263 [2024-07-16 01:18:00.171378] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:44.263 [2024-07-16 01:18:00.171638] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:44.263 [2024-07-16 01:18:00.171845] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.263 [2024-07-16 01:18:00.171867] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.263 [2024-07-16 01:18:00.171886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.263 [2024-07-16 01:18:00.174914] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.263 [2024-07-16 01:18:00.184274] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.263 [2024-07-16 01:18:00.184790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.263 [2024-07-16 01:18:00.184857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:44.263 [2024-07-16 01:18:00.184883] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:44.263 [2024-07-16 01:18:00.185180] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:44.263 [2024-07-16 01:18:00.185406] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.263 [2024-07-16 01:18:00.185427] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.263 [2024-07-16 01:18:00.185446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.263 [2024-07-16 01:18:00.188417] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.263 [2024-07-16 01:18:00.197478] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.263 [2024-07-16 01:18:00.197853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.263 [2024-07-16 01:18:00.197884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:44.263 [2024-07-16 01:18:00.197910] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:44.264 [2024-07-16 01:18:00.198253] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:44.264 [2024-07-16 01:18:00.198474] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.264 [2024-07-16 01:18:00.198495] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.264 [2024-07-16 01:18:00.198514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.264 [2024-07-16 01:18:00.201393] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.264 [2024-07-16 01:18:00.210659] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.264 [2024-07-16 01:18:00.211102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.264 [2024-07-16 01:18:00.211132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:44.264 [2024-07-16 01:18:00.211158] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:44.264 [2024-07-16 01:18:00.211434] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:44.264 [2024-07-16 01:18:00.211640] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.264 [2024-07-16 01:18:00.211661] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.264 [2024-07-16 01:18:00.211680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.264 [2024-07-16 01:18:00.214636] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.264 [2024-07-16 01:18:00.223942] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.264 [2024-07-16 01:18:00.224410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.264 [2024-07-16 01:18:00.224441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:44.264 [2024-07-16 01:18:00.224467] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:44.264 [2024-07-16 01:18:00.224759] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:44.264 [2024-07-16 01:18:00.224998] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.264 [2024-07-16 01:18:00.225020] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.264 [2024-07-16 01:18:00.225040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.264 [2024-07-16 01:18:00.227933] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.264 [2024-07-16 01:18:00.237000] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.264 [2024-07-16 01:18:00.237394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.264 [2024-07-16 01:18:00.237425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:44.264 [2024-07-16 01:18:00.237452] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:44.264 [2024-07-16 01:18:00.237733] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:44.264 [2024-07-16 01:18:00.237953] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.264 [2024-07-16 01:18:00.237986] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.264 [2024-07-16 01:18:00.238021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.264 [2024-07-16 01:18:00.240909] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.264 [2024-07-16 01:18:00.250034] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.264 [2024-07-16 01:18:00.250431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.264 [2024-07-16 01:18:00.250461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:44.264 [2024-07-16 01:18:00.250485] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:44.264 [2024-07-16 01:18:00.250740] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:44.264 [2024-07-16 01:18:00.250946] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.264 [2024-07-16 01:18:00.250992] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.264 [2024-07-16 01:18:00.251013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.264 [2024-07-16 01:18:00.254456] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.522 [2024-07-16 01:18:00.263481] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.522 [2024-07-16 01:18:00.263884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-16 01:18:00.263914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:44.522 [2024-07-16 01:18:00.263970] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:44.522 [2024-07-16 01:18:00.264259] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:44.522 [2024-07-16 01:18:00.264499] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.522 [2024-07-16 01:18:00.264520] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.522 [2024-07-16 01:18:00.264539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.522 [2024-07-16 01:18:00.267455] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.522 [2024-07-16 01:18:00.276737] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.522 [2024-07-16 01:18:00.277191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-16 01:18:00.277248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:44.522 [2024-07-16 01:18:00.277275] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:44.522 [2024-07-16 01:18:00.277561] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:44.522 [2024-07-16 01:18:00.277767] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.522 [2024-07-16 01:18:00.277788] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.522 [2024-07-16 01:18:00.277808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.522 [2024-07-16 01:18:00.280735] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.522 [2024-07-16 01:18:00.289872] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.522 [2024-07-16 01:18:00.290262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-16 01:18:00.290324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:44.522 [2024-07-16 01:18:00.290350] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:44.522 [2024-07-16 01:18:00.290629] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:44.522 [2024-07-16 01:18:00.290835] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.522 [2024-07-16 01:18:00.290857] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.522 [2024-07-16 01:18:00.290876] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.522 [2024-07-16 01:18:00.293802] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.522 [2024-07-16 01:18:00.303032] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.522 [2024-07-16 01:18:00.303446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-16 01:18:00.303477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:44.523 [2024-07-16 01:18:00.303503] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:44.523 [2024-07-16 01:18:00.303782] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:44.523 [2024-07-16 01:18:00.304033] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.523 [2024-07-16 01:18:00.304056] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.523 [2024-07-16 01:18:00.304078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.523 [2024-07-16 01:18:00.307025] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.523 [2024-07-16 01:18:00.316259] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.523 [2024-07-16 01:18:00.316707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.523 [2024-07-16 01:18:00.316762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:44.523 [2024-07-16 01:18:00.316787] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:44.523 [2024-07-16 01:18:00.317080] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:44.523 [2024-07-16 01:18:00.317307] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.523 [2024-07-16 01:18:00.317328] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.523 [2024-07-16 01:18:00.317347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.523 [2024-07-16 01:18:00.320291] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.523 [2024-07-16 01:18:00.329514] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.523 [2024-07-16 01:18:00.329986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.523 [2024-07-16 01:18:00.330019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:44.523 [2024-07-16 01:18:00.330046] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:44.523 [2024-07-16 01:18:00.330327] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:44.523 [2024-07-16 01:18:00.330537] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.523 [2024-07-16 01:18:00.330559] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.523 [2024-07-16 01:18:00.330578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.523 [2024-07-16 01:18:00.333499] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.523 [2024-07-16 01:18:00.342542] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.523 [2024-07-16 01:18:00.343018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.523 [2024-07-16 01:18:00.343051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:44.523 [2024-07-16 01:18:00.343087] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:44.523 [2024-07-16 01:18:00.343404] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:44.523 [2024-07-16 01:18:00.343616] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.523 [2024-07-16 01:18:00.343638] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.523 [2024-07-16 01:18:00.343657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.523 [2024-07-16 01:18:00.346717] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.523 [2024-07-16 01:18:00.355992] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.523 [2024-07-16 01:18:00.356422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.523 [2024-07-16 01:18:00.356476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:44.523 [2024-07-16 01:18:00.356503] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:44.523 [2024-07-16 01:18:00.356777] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:44.523 [2024-07-16 01:18:00.357012] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.523 [2024-07-16 01:18:00.357034] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.523 [2024-07-16 01:18:00.357054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.523 [2024-07-16 01:18:00.359975] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.523 [2024-07-16 01:18:00.369553] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.523 [2024-07-16 01:18:00.369930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.523 [2024-07-16 01:18:00.369985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:44.523 [2024-07-16 01:18:00.370023] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:44.523 [2024-07-16 01:18:00.370307] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:44.523 [2024-07-16 01:18:00.370528] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.523 [2024-07-16 01:18:00.370550] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.523 [2024-07-16 01:18:00.370570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.523 [2024-07-16 01:18:00.373515] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:45.044 [2024-07-16 01:18:01.016714] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.044 [2024-07-16 01:18:01.017106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.044 [2024-07-16 01:18:01.017138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:45.044 [2024-07-16 01:18:01.017165] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:45.044 [2024-07-16 01:18:01.017438] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:45.044 [2024-07-16 01:18:01.017645] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.044 [2024-07-16 01:18:01.017667] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.044 [2024-07-16 01:18:01.017686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.044 [2024-07-16 01:18:01.020733] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.044 [2024-07-16 01:18:01.029863] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.044 [2024-07-16 01:18:01.030356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.044 [2024-07-16 01:18:01.030386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:45.044 [2024-07-16 01:18:01.030411] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:45.044 [2024-07-16 01:18:01.030666] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:45.044 [2024-07-16 01:18:01.030872] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.044 [2024-07-16 01:18:01.030893] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.044 [2024-07-16 01:18:01.030913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.044 [2024-07-16 01:18:01.034232] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:45.302 [2024-07-16 01:18:01.043321] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.302 [2024-07-16 01:18:01.043693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.302 [2024-07-16 01:18:01.043723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:45.302 [2024-07-16 01:18:01.043748] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:45.302 [2024-07-16 01:18:01.044041] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:45.302 [2024-07-16 01:18:01.044281] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.302 [2024-07-16 01:18:01.044317] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.302 [2024-07-16 01:18:01.044338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.302 [2024-07-16 01:18:01.047212] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.302 [2024-07-16 01:18:01.056612] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.302 [2024-07-16 01:18:01.057113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.302 [2024-07-16 01:18:01.057146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:45.302 [2024-07-16 01:18:01.057173] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:45.302 [2024-07-16 01:18:01.057464] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:45.302 [2024-07-16 01:18:01.057671] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.302 [2024-07-16 01:18:01.057692] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.302 [2024-07-16 01:18:01.057711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.302 [2024-07-16 01:18:01.060716] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:45.302 [2024-07-16 01:18:01.069862] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.302 [2024-07-16 01:18:01.070229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.302 [2024-07-16 01:18:01.070281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:45.302 [2024-07-16 01:18:01.070306] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:45.302 [2024-07-16 01:18:01.070566] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:45.302 [2024-07-16 01:18:01.070775] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.302 [2024-07-16 01:18:01.070796] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.302 [2024-07-16 01:18:01.070815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.302 [2024-07-16 01:18:01.073768] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.302 [2024-07-16 01:18:01.083036] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.302 [2024-07-16 01:18:01.083431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.302 [2024-07-16 01:18:01.083492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:45.302 [2024-07-16 01:18:01.083517] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:45.302 [2024-07-16 01:18:01.083792] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:45.302 [2024-07-16 01:18:01.084024] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.302 [2024-07-16 01:18:01.084061] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.302 [2024-07-16 01:18:01.084088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.302 [2024-07-16 01:18:01.087020] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:45.302 [2024-07-16 01:18:01.096188] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.302 [2024-07-16 01:18:01.096648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.302 [2024-07-16 01:18:01.096706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:45.302 [2024-07-16 01:18:01.096731] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:45.302 [2024-07-16 01:18:01.097028] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:45.302 [2024-07-16 01:18:01.097293] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.302 [2024-07-16 01:18:01.097317] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.302 [2024-07-16 01:18:01.097354] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.302 [2024-07-16 01:18:01.100728] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.302 [2024-07-16 01:18:01.109566] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.302 [2024-07-16 01:18:01.109971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.302 [2024-07-16 01:18:01.110019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:45.302 [2024-07-16 01:18:01.110046] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:45.302 [2024-07-16 01:18:01.110336] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:45.302 [2024-07-16 01:18:01.110548] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.302 [2024-07-16 01:18:01.110571] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.302 [2024-07-16 01:18:01.110591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.302 [2024-07-16 01:18:01.113668] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:45.302 [2024-07-16 01:18:01.122868] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.302 [2024-07-16 01:18:01.123250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.302 [2024-07-16 01:18:01.123297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:45.302 [2024-07-16 01:18:01.123323] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:45.303 [2024-07-16 01:18:01.123606] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:45.303 [2024-07-16 01:18:01.123819] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.303 [2024-07-16 01:18:01.123841] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.303 [2024-07-16 01:18:01.123860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.303 [2024-07-16 01:18:01.126910] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.303 [2024-07-16 01:18:01.136205] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.303 [2024-07-16 01:18:01.136728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.303 [2024-07-16 01:18:01.136785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:45.303 [2024-07-16 01:18:01.136812] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:45.303 [2024-07-16 01:18:01.137117] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:45.303 [2024-07-16 01:18:01.137342] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.303 [2024-07-16 01:18:01.137378] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.303 [2024-07-16 01:18:01.137398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.303 [2024-07-16 01:18:01.140442] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:45.303 [2024-07-16 01:18:01.149567] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.303 [2024-07-16 01:18:01.149926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.303 [2024-07-16 01:18:01.149978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:45.303 [2024-07-16 01:18:01.150007] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:45.303 [2024-07-16 01:18:01.150313] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:45.303 [2024-07-16 01:18:01.150534] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.303 [2024-07-16 01:18:01.150556] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.303 [2024-07-16 01:18:01.150575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.303 [2024-07-16 01:18:01.153600] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.303 [2024-07-16 01:18:01.163017] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.303 [2024-07-16 01:18:01.163424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.303 [2024-07-16 01:18:01.163454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:45.303 [2024-07-16 01:18:01.163481] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:45.303 [2024-07-16 01:18:01.163757] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:45.303 [2024-07-16 01:18:01.163992] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.303 [2024-07-16 01:18:01.164017] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.303 [2024-07-16 01:18:01.164040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.303 [2024-07-16 01:18:01.167062] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:45.303 [2024-07-16 01:18:01.176449] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.303 [2024-07-16 01:18:01.176826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.303 [2024-07-16 01:18:01.176857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:45.303 [2024-07-16 01:18:01.176884] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:45.303 [2024-07-16 01:18:01.177197] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:45.303 [2024-07-16 01:18:01.177446] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.303 [2024-07-16 01:18:01.177467] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.303 [2024-07-16 01:18:01.177487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.303 [2024-07-16 01:18:01.180485] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.303 [2024-07-16 01:18:01.189725] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.303 [2024-07-16 01:18:01.190104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.303 [2024-07-16 01:18:01.190157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:45.303 [2024-07-16 01:18:01.190184] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:45.303 [2024-07-16 01:18:01.190462] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:45.303 [2024-07-16 01:18:01.190668] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.303 [2024-07-16 01:18:01.190689] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.303 [2024-07-16 01:18:01.190709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.303 [2024-07-16 01:18:01.193647] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:45.303 [2024-07-16 01:18:01.202851] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.303 [2024-07-16 01:18:01.203250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.303 [2024-07-16 01:18:01.203279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:45.303 [2024-07-16 01:18:01.203304] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:45.303 [2024-07-16 01:18:01.203565] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:45.303 [2024-07-16 01:18:01.203771] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.303 [2024-07-16 01:18:01.203793] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.303 [2024-07-16 01:18:01.203812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.303 [2024-07-16 01:18:01.206770] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.303 [2024-07-16 01:18:01.215986] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.303 [2024-07-16 01:18:01.216394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.303 [2024-07-16 01:18:01.216425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:45.303 [2024-07-16 01:18:01.216450] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:45.303 [2024-07-16 01:18:01.216714] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:45.303 [2024-07-16 01:18:01.216920] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.303 [2024-07-16 01:18:01.216963] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.303 [2024-07-16 01:18:01.216987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.303 [2024-07-16 01:18:01.219801] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:45.303 [2024-07-16 01:18:01.229224] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.303 [2024-07-16 01:18:01.229588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.303 [2024-07-16 01:18:01.229618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:45.303 [2024-07-16 01:18:01.229644] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:45.303 [2024-07-16 01:18:01.229909] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:45.303 [2024-07-16 01:18:01.230143] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.303 [2024-07-16 01:18:01.230165] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.303 [2024-07-16 01:18:01.230185] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.303 [2024-07-16 01:18:01.233136] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.303 [2024-07-16 01:18:01.242575] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.303 [2024-07-16 01:18:01.242949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.303 [2024-07-16 01:18:01.242986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:45.303 [2024-07-16 01:18:01.243012] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:45.303 [2024-07-16 01:18:01.243291] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:45.303 [2024-07-16 01:18:01.243498] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.303 [2024-07-16 01:18:01.243519] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.303 [2024-07-16 01:18:01.243539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.303 [2024-07-16 01:18:01.246498] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:45.303 [2024-07-16 01:18:01.255736] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.303 [2024-07-16 01:18:01.256149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.303 [2024-07-16 01:18:01.256180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:45.303 [2024-07-16 01:18:01.256205] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:45.303 [2024-07-16 01:18:01.256465] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:45.303 [2024-07-16 01:18:01.256671] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.303 [2024-07-16 01:18:01.256693] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.303 [2024-07-16 01:18:01.256713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.303 [2024-07-16 01:18:01.259669] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.303 [2024-07-16 01:18:01.268880] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.303 [2024-07-16 01:18:01.269309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.303 [2024-07-16 01:18:01.269339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:45.303 [2024-07-16 01:18:01.269369] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:45.304 [2024-07-16 01:18:01.269640] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:45.304 [2024-07-16 01:18:01.269847] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.304 [2024-07-16 01:18:01.269868] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.304 [2024-07-16 01:18:01.269887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.304 [2024-07-16 01:18:01.272865] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:45.304 [2024-07-16 01:18:01.282138] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:45.304 [2024-07-16 01:18:01.282530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:45.304 [2024-07-16 01:18:01.282561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420
00:24:45.304 [2024-07-16 01:18:01.282588] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set
00:24:45.304 [2024-07-16 01:18:01.282870] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor
00:24:45.304 [2024-07-16 01:18:01.283127] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:45.304 [2024-07-16 01:18:01.283151] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:45.304 [2024-07-16 01:18:01.283172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:45.304 [2024-07-16 01:18:01.286100] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:45.563 [2024-07-16 01:18:01.295709] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:45.563 [2024-07-16 01:18:01.296153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:45.563 [2024-07-16 01:18:01.296186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420
00:24:45.563 [2024-07-16 01:18:01.296213] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set
00:24:45.563 [2024-07-16 01:18:01.296507] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor
00:24:45.563 [2024-07-16 01:18:01.296728] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:45.563 [2024-07-16 01:18:01.296749] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:45.563 [2024-07-16 01:18:01.296768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:45.563 [2024-07-16 01:18:01.299928] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
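The nine-record cycle above is bdev_nvme's reconnect path: the bdev layer disconnects the controller, the TCP transport's connect() is refused (errno 111 is ECONNREFUSED, i.e. nothing is listening on 10.0.0.2:4420 because the target process is gone), the qpair is torn down, spdk_nvme_ctrlr_reconnect_poll_async gives up, and the reset completes with failure; identical cycles with only the timestamps advancing, roughly every 13 ms, repeat until the target is restarted below. A minimal standalone probe, not part of this test suite (address and port taken from the log), reproduces the same refusal from the shell:

    # Hypothetical probe: bash's /dev/tcp pseudo-device issues the same connect()
    # that posix_sock_create does, so with no listener it fails with ECONNREFUSED.
    if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
        echo "listener present on 10.0.0.2:4420"
    else
        echo "connection refused (errno 111) or timed out"
    fi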
00:24:45.563 [2024-07-16 01:18:01.308905] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:45.563 [2024-07-16 01:18:01.309303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:45.563 [2024-07-16 01:18:01.309396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420
00:24:45.563 [2024-07-16 01:18:01.309422] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set
00:24:45.563 [2024-07-16 01:18:01.309692] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor
00:24:45.563 [2024-07-16 01:18:01.309899] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:45.563 [2024-07-16 01:18:01.309924] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:45.563 [2024-07-16 01:18:01.309978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:45.563 [2024-07-16 01:18:01.312924] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:45.563 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 56647 Killed "${NVMF_APP[@]}" "$@"
00:24:45.563 01:18:01 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:24:45.563 01:18:01 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:24:45.563 01:18:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:24:45.563 01:18:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable
00:24:45.563 01:18:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:45.563 01:18:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=57688
00:24:45.563 01:18:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:24:45.563 01:18:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 57688
00:24:45.563 01:18:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 57688 ']'
00:24:45.563 01:18:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:45.563 01:18:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100
00:24:45.563 01:18:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:45.563 01:18:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable
00:24:45.563 01:18:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:45.563 [2024-07-16 01:18:01.322435] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:45.563 [2024-07-16 01:18:01.322821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:45.563 [2024-07-16 01:18:01.322852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420
00:24:45.563 [2024-07-16 01:18:01.322879] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set
00:24:45.563 [2024-07-16 01:18:01.323180] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor
00:24:45.563 [2024-07-16 01:18:01.323413] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:45.563 [2024-07-16 01:18:01.323435] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:45.563 [2024-07-16 01:18:01.323455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:45.563 [2024-07-16 01:18:01.326552] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:45.563 [2024-07-16 01:18:01.335793] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:45.563 [2024-07-16 01:18:01.336237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:45.563 [2024-07-16 01:18:01.336294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420
00:24:45.563 [2024-07-16 01:18:01.336321] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set
00:24:45.563 [2024-07-16 01:18:01.336585] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor
00:24:45.563 [2024-07-16 01:18:01.336797] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:45.564 [2024-07-16 01:18:01.336823] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:45.564 [2024-07-16 01:18:01.336843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:45.564 [2024-07-16 01:18:01.339929] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
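The xtrace above is the harness bringing the target back: bdevperf.sh killed the old nvmf_tgt (pid 56647), then tgt_init → nvmfappstart launches a fresh one (pid 57688) inside the cvl_0_0_ns_spdk namespace and waitforlisten polls its RPC socket. A rough sketch of that flow, assuming an SPDK checkout (the real helpers live in test/nvmf/common.sh and test/common/autotest_common.sh; waitforlisten decides the app is up roughly by polling rpc_get_methods):

    # Sketch only; paths, netns name and values are taken from this log.
    NVMF_APP=(ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt)
    "${NVMF_APP[@]}" -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    max_retries=100
    # Poll the UNIX domain RPC socket until the new process answers.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        (( max_retries-- > 0 )) || { echo "nvmf_tgt $nvmfpid never listened" >&2; exit 1; }
        sleep 0.1
    done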
00:24:45.564 [2024-07-16 01:18:01.349201] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:45.564 [2024-07-16 01:18:01.349665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:45.564 [2024-07-16 01:18:01.349697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420
00:24:45.564 [2024-07-16 01:18:01.349727] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set
00:24:45.564 [2024-07-16 01:18:01.350025] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor
00:24:45.564 [2024-07-16 01:18:01.350298] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:45.564 [2024-07-16 01:18:01.350336] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:45.564 [2024-07-16 01:18:01.350357] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:45.564 [2024-07-16 01:18:01.353727] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:45.564 [2024-07-16 01:18:01.362601] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:45.564 [2024-07-16 01:18:01.362997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:45.564 [2024-07-16 01:18:01.363030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420
00:24:45.564 [2024-07-16 01:18:01.363058] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set
00:24:45.564 [2024-07-16 01:18:01.363352] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor
00:24:45.564 [2024-07-16 01:18:01.363580] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:45.564 [2024-07-16 01:18:01.363602] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:45.564 [2024-07-16 01:18:01.363623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:45.564 [2024-07-16 01:18:01.365079] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization...
00:24:45.564 [2024-07-16 01:18:01.365138] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:45.564 [2024-07-16 01:18:01.366863] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:45.564 EAL: No free 2048 kB hugepages reported on node 1
00:24:45.564 [2024-07-16 01:18:01.402813] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:45.564 [2024-07-16 01:18:01.403296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:45.564 [2024-07-16 01:18:01.403327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420
00:24:45.564 [2024-07-16 01:18:01.403354] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set
00:24:45.564 [2024-07-16 01:18:01.403635] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor
00:24:45.564 [2024-07-16 01:18:01.403847] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:45.564 [2024-07-16 01:18:01.403869] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:45.564 [2024-07-16 01:18:01.403888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:45.564 [2024-07-16 01:18:01.407128] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:45.564 [2024-07-16 01:18:01.416224] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:45.564 [2024-07-16 01:18:01.416585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:45.564 [2024-07-16 01:18:01.416616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420
00:24:45.564 [2024-07-16 01:18:01.416641] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set
00:24:45.564 [2024-07-16 01:18:01.416913] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor
00:24:45.564 [2024-07-16 01:18:01.417163] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:45.564 [2024-07-16 01:18:01.417187] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:45.564 [2024-07-16 01:18:01.417208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:45.564 [2024-07-16 01:18:01.420321] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
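The EAL hugepage notice above is informational rather than fatal: it typically means the free 2048 kB hugepages were reserved on the other NUMA node, so DPDK simply reports none free on node 1 and initialization continues. The kernel's per-node counters can be read directly (generic sysfs interface, independent of this harness):

    # How many 2 MB hugepages each NUMA node holds, and how many are still free.
    for d in /sys/devices/system/node/node*/hugepages/hugepages-2048kB; do
        echo "$d: total=$(cat "$d/nr_hugepages") free=$(cat "$d/free_hugepages")"
    done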
00:24:45.564 [2024-07-16 01:18:01.429631] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:45.564 [2024-07-16 01:18:01.430031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:45.564 [2024-07-16 01:18:01.430064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420
00:24:45.564 [2024-07-16 01:18:01.430091] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set
00:24:45.564 [2024-07-16 01:18:01.430374] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor
00:24:45.564 [2024-07-16 01:18:01.430592] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:45.564 [2024-07-16 01:18:01.430615] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:45.564 [2024-07-16 01:18:01.430636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:45.564 [2024-07-16 01:18:01.431709] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3
00:24:45.564 [2024-07-16 01:18:01.433792] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:45.564 [2024-07-16 01:18:01.443121] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:45.564 [2024-07-16 01:18:01.443730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:45.564 [2024-07-16 01:18:01.443772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420
00:24:45.564 [2024-07-16 01:18:01.443805] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set
00:24:45.564 [2024-07-16 01:18:01.444088] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor
00:24:45.564 [2024-07-16 01:18:01.444345] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:45.564 [2024-07-16 01:18:01.444368] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:45.564 [2024-07-16 01:18:01.444393] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:45.564 [2024-07-16 01:18:01.447525] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
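"Total cores available: 3" follows directly from the -m 0xE mask passed to nvmf_tgt above: 0xE is binary 1110, i.e. cores 1, 2 and 3, leaving core 0 for the rest of the system, which is exactly where the three reactors start further down. Any SPDK/DPDK core mask decodes the same way:

    # Decode a core mask bit by bit (0xE selects cores 1, 2 and 3).
    mask=0xE
    for i in $(seq 0 31); do
        (( (mask >> i) & 1 )) && echo "core $i selected"
    done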
00:24:45.565 [2024-07-16 01:18:01.537289] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.565 [2024-07-16 01:18:01.537709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.565 [2024-07-16 01:18:01.537740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:45.565 [2024-07-16 01:18:01.537766] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:45.565 [2024-07-16 01:18:01.538045] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:45.565 [2024-07-16 01:18:01.538345] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.565 [2024-07-16 01:18:01.538368] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.565 [2024-07-16 01:18:01.538403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.565 [2024-07-16 01:18:01.541494] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.565 [2024-07-16 01:18:01.543168] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:45.565 [2024-07-16 01:18:01.543199] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:45.565 [2024-07-16 01:18:01.543213] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:45.565 [2024-07-16 01:18:01.543240] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:45.565 [2024-07-16 01:18:01.543250] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:45.565 [2024-07-16 01:18:01.543313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:45.565 [2024-07-16 01:18:01.543368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:45.565 [2024-07-16 01:18:01.543371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:45.565 [2024-07-16 01:18:01.550838] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.565 [2024-07-16 01:18:01.551436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.565 [2024-07-16 01:18:01.551477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:45.565 [2024-07-16 01:18:01.551509] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:45.565 [2024-07-16 01:18:01.551781] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:45.565 [2024-07-16 01:18:01.552071] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.565 [2024-07-16 01:18:01.552096] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.565 [2024-07-16 01:18:01.552124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:45.565 [2024-07-16 01:18:01.555556] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.825 [2024-07-16 01:18:01.564469] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.825 [2024-07-16 01:18:01.564984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.825 [2024-07-16 01:18:01.565028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:45.825 [2024-07-16 01:18:01.565061] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:45.825 [2024-07-16 01:18:01.565356] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:45.825 [2024-07-16 01:18:01.565587] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.825 [2024-07-16 01:18:01.565611] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.825 [2024-07-16 01:18:01.565636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.825 [2024-07-16 01:18:01.568886] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.825 [2024-07-16 01:18:01.578212] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.825 [2024-07-16 01:18:01.578896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.825 [2024-07-16 01:18:01.578952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:45.825 [2024-07-16 01:18:01.578998] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:45.825 [2024-07-16 01:18:01.579280] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:45.825 [2024-07-16 01:18:01.579526] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.825 [2024-07-16 01:18:01.579551] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.825 [2024-07-16 01:18:01.579575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.825 [2024-07-16 01:18:01.582809] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:45.825 [2024-07-16 01:18:01.591928] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.825 [2024-07-16 01:18:01.592536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.825 [2024-07-16 01:18:01.592580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:45.825 [2024-07-16 01:18:01.592614] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:45.825 [2024-07-16 01:18:01.592892] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:45.825 [2024-07-16 01:18:01.593165] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.825 [2024-07-16 01:18:01.593191] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.825 [2024-07-16 01:18:01.593241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.825 [2024-07-16 01:18:01.596527] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.825 [2024-07-16 01:18:01.605722] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.825 [2024-07-16 01:18:01.606319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.825 [2024-07-16 01:18:01.606363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:45.825 [2024-07-16 01:18:01.606395] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:45.825 [2024-07-16 01:18:01.606686] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:45.825 [2024-07-16 01:18:01.606967] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.825 [2024-07-16 01:18:01.606993] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.825 [2024-07-16 01:18:01.607035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.825 [2024-07-16 01:18:01.610635] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:45.825 [2024-07-16 01:18:01.619251] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.825 [2024-07-16 01:18:01.619795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.825 [2024-07-16 01:18:01.619847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:45.825 [2024-07-16 01:18:01.619880] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:45.825 [2024-07-16 01:18:01.620189] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:45.825 [2024-07-16 01:18:01.620463] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.825 [2024-07-16 01:18:01.620487] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.825 [2024-07-16 01:18:01.620513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.825 [2024-07-16 01:18:01.623708] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.825 [2024-07-16 01:18:01.632770] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.825 [2024-07-16 01:18:01.633156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.825 [2024-07-16 01:18:01.633189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:45.825 [2024-07-16 01:18:01.633216] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:45.825 [2024-07-16 01:18:01.633506] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:45.825 [2024-07-16 01:18:01.633732] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.825 [2024-07-16 01:18:01.633756] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.825 [2024-07-16 01:18:01.633778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.825 [2024-07-16 01:18:01.637027] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:45.825 [2024-07-16 01:18:01.646492] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.825 [2024-07-16 01:18:01.646868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.825 [2024-07-16 01:18:01.646900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:45.825 [2024-07-16 01:18:01.646927] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:45.825 [2024-07-16 01:18:01.647205] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:45.825 [2024-07-16 01:18:01.647460] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.825 [2024-07-16 01:18:01.647484] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.825 [2024-07-16 01:18:01.647506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.825 [2024-07-16 01:18:01.650796] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.825 01:18:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:45.825 01:18:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:24:45.825 01:18:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:45.825 01:18:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:45.825 01:18:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:45.825 [2024-07-16 01:18:01.660049] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.825 [2024-07-16 01:18:01.660525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.825 [2024-07-16 01:18:01.660567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:45.825 [2024-07-16 01:18:01.660595] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:45.825 [2024-07-16 01:18:01.660892] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:45.825 [2024-07-16 01:18:01.661204] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.825 [2024-07-16 01:18:01.661237] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.825 [2024-07-16 01:18:01.661276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.825 [2024-07-16 01:18:01.664602] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
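The retries keep firing because the controller was attached with reconnect behavior enabled, so bdev_nvme keeps re-dialing until the target's listener (created further below) comes up. A minimal host-side sketch of such an attachment via SPDK's stock scripts/rpc.py follows; the bdev name and the two reconnect options are illustrative assumptions, not values read from this run:

# Sketch (assumed bdev name and reconnect options): attach a TCP controller
# that bdev_nvme keeps retrying; -1 means retry indefinitely.
scripts/rpc.py bdev_nvme_attach_controller -b Nvme1 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec=-1 --reconnect-delay-sec=1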
00:24:45.825 [2024-07-16 01:18:01.673558] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.825 01:18:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:45.825 01:18:01 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:45.825 [2024-07-16 01:18:01.673928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.825 [2024-07-16 01:18:01.673979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:45.825 [2024-07-16 01:18:01.673997] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:45.825 01:18:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.825 01:18:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:45.825 [2024-07-16 01:18:01.674213] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:45.825 [2024-07-16 01:18:01.674442] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.825 [2024-07-16 01:18:01.674464] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.825 [2024-07-16 01:18:01.674483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.825 [2024-07-16 01:18:01.677737] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.825 [2024-07-16 01:18:01.678540] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:45.825 01:18:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.825 01:18:01 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:45.825 01:18:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.825 01:18:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:45.825 [2024-07-16 01:18:01.687244] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.825 [2024-07-16 01:18:01.687581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.826 [2024-07-16 01:18:01.687609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:45.826 [2024-07-16 01:18:01.687626] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:45.826 [2024-07-16 01:18:01.687841] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:45.826 [2024-07-16 01:18:01.688100] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.826 [2024-07-16 01:18:01.688123] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.826 [2024-07-16 01:18:01.688137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:45.826 [2024-07-16 01:18:01.691376] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.826 [2024-07-16 01:18:01.700749] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.826 [2024-07-16 01:18:01.701120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.826 [2024-07-16 01:18:01.701149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:45.826 [2024-07-16 01:18:01.701166] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:45.826 [2024-07-16 01:18:01.701396] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:45.826 [2024-07-16 01:18:01.701619] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.826 [2024-07-16 01:18:01.701640] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.826 [2024-07-16 01:18:01.701653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.826 [2024-07-16 01:18:01.704815] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.826 [2024-07-16 01:18:01.714294] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.826 [2024-07-16 01:18:01.714700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.826 [2024-07-16 01:18:01.714740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:45.826 [2024-07-16 01:18:01.714758] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:45.826 [2024-07-16 01:18:01.715000] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:45.826 [2024-07-16 01:18:01.715215] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.826 [2024-07-16 01:18:01.715236] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.826 [2024-07-16 01:18:01.715269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.826 [2024-07-16 01:18:01.718480] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
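For reference, the rpc_cmd traces woven through this stretch (nvmf_create_transport and bdev_malloc_create above, the subsystem, namespace, and listener calls just below) are the standard NVMe-oF/TCP target bring-up. Consolidated as a sketch, with rpc_cmd taken as its usual thin wrapper around scripts/rpc.py aimed at the target's RPC socket, and -o being the TCP-specific flag carried in NVMF_TRANSPORT_OPTS:

# Target bring-up as traced in this test, consolidated:
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # TCP transport; -u: in-capsule data size
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MiB RAM-backed bdev, 512 B blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420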
00:24:45.826 Malloc0 00:24:45.826 [2024-07-16 01:18:01.727800] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.826 01:18:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.826 [2024-07-16 01:18:01.728276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.826 01:18:01 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:45.826 [2024-07-16 01:18:01.728311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:45.826 [2024-07-16 01:18:01.728331] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:45.826 01:18:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.826 01:18:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:45.826 [2024-07-16 01:18:01.728553] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:45.826 [2024-07-16 01:18:01.728777] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.826 [2024-07-16 01:18:01.728801] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.826 [2024-07-16 01:18:01.728826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.826 [2024-07-16 01:18:01.732188] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.826 01:18:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.826 01:18:01 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:45.826 01:18:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.826 01:18:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:45.826 [2024-07-16 01:18:01.741495] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.826 [2024-07-16 01:18:01.741851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.826 [2024-07-16 01:18:01.741883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9fe0 with addr=10.0.0.2, port=4420 00:24:45.826 [2024-07-16 01:18:01.741910] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9fe0 is same with the state(5) to be set 00:24:45.826 [2024-07-16 01:18:01.742174] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9fe0 (9): Bad file descriptor 00:24:45.826 [2024-07-16 01:18:01.742446] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.826 [2024-07-16 01:18:01.742470] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.826 [2024-07-16 01:18:01.742491] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:45.826 01:18:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.826 01:18:01 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:45.826 01:18:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.826 01:18:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:45.826 [2024-07-16 01:18:01.745835] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.826 [2024-07-16 01:18:01.747543] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:45.826 01:18:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.826 01:18:01 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 56935 00:24:45.826 [2024-07-16 01:18:01.755108] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.826 [2024-07-16 01:18:01.783235] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:55.793 00:24:55.793 Latency(us) 00:24:55.793 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:55.793 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:55.793 Verification LBA range: start 0x0 length 0x4000 00:24:55.793 Nvme1n1 : 15.01 6370.06 24.88 10036.26 0.00 7778.26 2560.76 24660.95 00:24:55.793 =================================================================================================================== 00:24:55.793 Total : 6370.06 24.88 10036.26 0.00 7778.26 2560.76 24660.95 00:24:55.793 01:18:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:24:55.793 01:18:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:55.793 01:18:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.793 01:18:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:55.793 01:18:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.793 01:18:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:24:55.793 01:18:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:24:55.793 01:18:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:55.793 01:18:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:24:55.793 01:18:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:55.793 01:18:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:24:55.793 01:18:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:55.793 01:18:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:55.793 rmmod nvme_tcp 00:24:55.793 rmmod nvme_fabrics 00:24:55.793 rmmod nvme_keyring 00:24:55.793 01:18:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:55.793 01:18:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:24:55.793 01:18:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:24:55.793 01:18:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 57688 ']' 00:24:55.793 01:18:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 57688 00:24:55.793 01:18:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 57688 ']' 
00:24:55.793 01:18:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 57688 00:24:55.793 01:18:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:24:55.793 01:18:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:55.793 01:18:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 57688 00:24:55.793 01:18:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:55.793 01:18:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:55.793 01:18:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 57688' 00:24:55.793 killing process with pid 57688 00:24:55.793 01:18:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 57688 00:24:55.793 01:18:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 57688 00:24:55.793 01:18:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:55.793 01:18:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:55.793 01:18:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:55.793 01:18:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:55.793 01:18:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:55.793 01:18:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:55.793 01:18:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:55.793 01:18:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:57.694 01:18:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:57.694 00:24:57.694 real 0m22.544s 00:24:57.694 user 0m58.466s 00:24:57.694 sys 0m4.946s 00:24:57.694 01:18:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:57.694 01:18:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:57.694 ************************************ 00:24:57.694 END TEST nvmf_bdevperf 00:24:57.694 ************************************ 00:24:57.694 01:18:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:57.694 01:18:13 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:24:57.694 01:18:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:57.694 01:18:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:57.694 01:18:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:57.694 ************************************ 00:24:57.694 START TEST nvmf_target_disconnect 00:24:57.694 ************************************ 00:24:57.694 01:18:13 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:24:57.694 * Looking for test storage... 
00:24:57.694 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:57.694 01:18:13 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:57.694 01:18:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:24:57.694 01:18:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:57.694 01:18:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:57.694 01:18:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:57.694 01:18:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:57.694 01:18:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:57.694 01:18:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:57.694 01:18:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:57.694 01:18:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:57.694 01:18:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:57.694 01:18:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:57.694 01:18:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:57.694 01:18:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:57.694 01:18:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:57.694 01:18:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:57.694 01:18:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:57.694 01:18:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:57.694 01:18:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:57.694 01:18:13 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:57.694 01:18:13 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:57.694 01:18:13 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:57.694 01:18:13 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.694 01:18:13 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.694 01:18:13 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.694 01:18:13 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:24:57.694 01:18:13 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.694 01:18:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:24:57.694 01:18:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:57.694 01:18:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:57.694 01:18:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:57.694 01:18:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:57.694 01:18:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:57.694 01:18:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:57.694 01:18:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:57.694 01:18:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:57.694 01:18:13 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:57.694 01:18:13 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:24:57.694 01:18:13 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:24:57.694 01:18:13 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:24:57.694 01:18:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:57.694 01:18:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:57.694 01:18:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:24:57.694 01:18:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:57.694 01:18:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:57.695 01:18:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:57.695 01:18:13 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:57.695 01:18:13 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:57.695 01:18:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:57.695 01:18:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:57.695 01:18:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:24:57.695 01:18:13 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:25:00.249 Found 0000:09:00.0 (0x8086 - 0x159b) 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:25:00.249 Found 0000:09:00.1 (0x8086 - 0x159b) 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:00.249 01:18:15 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:00.249 Found net devices under 0000:09:00.0: cvl_0_0 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:00.249 Found net devices under 0000:09:00.1: cvl_0_1 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:00.249 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:00.249 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:25:00.249 00:25:00.249 --- 10.0.0.2 ping statistics --- 00:25:00.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.249 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:00.249 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:00.249 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:25:00.249 00:25:00.249 --- 10.0.0.1 ping statistics --- 00:25:00.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.249 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:25:00.249 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:00.250 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:25:00.250 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:00.250 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:00.250 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:00.250 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:00.250 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:00.250 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:00.250 01:18:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:00.250 01:18:15 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:25:00.250 01:18:15 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:00.250 01:18:15 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:00.250 01:18:15 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:00.250 ************************************ 00:25:00.250 START TEST nvmf_target_disconnect_tc1 00:25:00.250 ************************************ 00:25:00.250 01:18:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:25:00.250 01:18:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:00.250 01:18:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:25:00.250 
01:18:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:00.250 01:18:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:00.250 01:18:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:00.250 01:18:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:00.250 01:18:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:00.250 01:18:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:00.250 01:18:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:00.250 01:18:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:00.250 01:18:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:25:00.250 01:18:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:00.250 EAL: No free 2048 kB hugepages reported on node 1 00:25:00.250 [2024-07-16 01:18:15.915216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.250 [2024-07-16 01:18:15.915289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461340 with addr=10.0.0.2, port=4420 00:25:00.250 [2024-07-16 01:18:15.915352] nvme_tcp.c:2712:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:00.250 [2024-07-16 01:18:15.915395] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:00.250 [2024-07-16 01:18:15.915414] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:25:00.250 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:25:00.250 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:25:00.250 Initializing NVMe Controllers 00:25:00.250 01:18:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:25:00.250 01:18:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:00.250 01:18:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:00.250 01:18:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:00.250 00:25:00.250 real 0m0.095s 00:25:00.250 user 0m0.033s 00:25:00.250 sys 
0m0.062s 00:25:00.250 01:18:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:00.250 01:18:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:00.250 ************************************ 00:25:00.250 END TEST nvmf_target_disconnect_tc1 00:25:00.250 ************************************ 00:25:00.250 01:18:15 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:25:00.250 01:18:15 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:25:00.250 01:18:15 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:00.250 01:18:15 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:00.250 01:18:15 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:00.250 ************************************ 00:25:00.250 START TEST nvmf_target_disconnect_tc2 00:25:00.250 ************************************ 00:25:00.250 01:18:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:25:00.250 01:18:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:25:00.250 01:18:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:25:00.250 01:18:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:00.250 01:18:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:00.250 01:18:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:00.250 01:18:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=61372 00:25:00.250 01:18:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:25:00.250 01:18:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 61372 00:25:00.250 01:18:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 61372 ']' 00:25:00.250 01:18:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:00.250 01:18:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:00.250 01:18:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:00.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
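The tc1 case above passes precisely because the connection fails: reconnect is pointed at 10.0.0.2:4420 before any target is listening, spdk_nvme_probe() cannot create the admin qpair (connect() again returns errno 111), and the NOT helper from autotest_common.sh turns the nonzero exit status (es=1 in the trace) into a pass. The assertion boils down to the following, with the workspace path shortened:

# tc1's core assertion: NOT succeeds iff the wrapped command fails.
NOT build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'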
00:25:00.250 01:18:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable
00:25:00.250 01:18:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:00.250 [2024-07-16 01:18:16.030143] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization...
00:25:00.250 [2024-07-16 01:18:16.030226] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:25:00.250 EAL: No free 2048 kB hugepages reported on node 1
00:25:00.250 [2024-07-16 01:18:16.096053] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4
00:25:00.250 [2024-07-16 01:18:16.211422] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:25:00.250 [2024-07-16 01:18:16.211477] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:25:00.250 [2024-07-16 01:18:16.211507] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:25:00.250 [2024-07-16 01:18:16.211518] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running.
00:25:00.250 [2024-07-16 01:18:16.211528] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:25:00.250 [2024-07-16 01:18:16.211613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:25:00.250 [2024-07-16 01:18:16.211676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:25:00.250 [2024-07-16 01:18:16.211707] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:25:00.250 [2024-07-16 01:18:16.211709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:25:00.508 01:18:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:25:00.508 01:18:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0
00:25:00.508 01:18:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:25:00.508 01:18:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable
00:25:00.508 01:18:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:00.508 01:18:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:25:00.508 01:18:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:25:00.508 01:18:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:00.508 01:18:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:00.508 Malloc0
00:25:00.508 01:18:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:00.508 01:18:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:25:00.508 01:18:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:00.508 01:18:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:00.508 [2024-07-16 01:18:16.414304] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:25:00.509 01:18:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:00.509 01:18:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:25:00.509 01:18:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:00.509 01:18:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:00.509 01:18:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:00.509 01:18:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:25:00.509 01:18:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:00.509 01:18:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:00.509 01:18:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:00.509 01:18:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:00.509 01:18:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:00.509 01:18:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:00.509 [2024-07-16 01:18:16.442549] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:00.509 01:18:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:00.509 01:18:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:25:00.509 01:18:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:00.509 01:18:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:00.509 01:18:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:00.509 01:18:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=61447
00:25:00.509 01:18:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2
00:25:00.509 01:18:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:25:00.767 EAL: No free 2048 kB
hugepages reported on node 1 00:25:02.682 01:18:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 61372 00:25:02.682 01:18:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:25:02.682 Read completed with error (sct=0, sc=8) 00:25:02.682 starting I/O failed 00:25:02.682 Read completed with error (sct=0, sc=8) 00:25:02.682 starting I/O failed 00:25:02.682 Read completed with error (sct=0, sc=8) 00:25:02.682 starting I/O failed 00:25:02.682 Read completed with error (sct=0, sc=8) 00:25:02.682 starting I/O failed 00:25:02.682 Read completed with error (sct=0, sc=8) 00:25:02.682 starting I/O failed 00:25:02.682 Read completed with error (sct=0, sc=8) 00:25:02.682 starting I/O failed 00:25:02.682 Write completed with error (sct=0, sc=8) 00:25:02.682 starting I/O failed 00:25:02.682 Read completed with error (sct=0, sc=8) 00:25:02.682 starting I/O failed 00:25:02.682 Write completed with error (sct=0, sc=8) 00:25:02.682 starting I/O failed 00:25:02.682 Write completed with error (sct=0, sc=8) 00:25:02.682 starting I/O failed 00:25:02.682 Write completed with error (sct=0, sc=8) 00:25:02.682 starting I/O failed 00:25:02.682 Read completed with error (sct=0, sc=8) 00:25:02.682 starting I/O failed 00:25:02.682 Write completed with error (sct=0, sc=8) 00:25:02.682 starting I/O failed 00:25:02.682 Write completed with error (sct=0, sc=8) 00:25:02.682 starting I/O failed 00:25:02.682 Read completed with error (sct=0, sc=8) 00:25:02.682 starting I/O failed 00:25:02.682 Read completed with error (sct=0, sc=8) 00:25:02.682 starting I/O failed 00:25:02.682 Write completed with error (sct=0, sc=8) 00:25:02.682 starting I/O failed 00:25:02.682 Read completed with error (sct=0, sc=8) 00:25:02.682 starting I/O failed 00:25:02.682 Write completed with error (sct=0, sc=8) 00:25:02.682 starting I/O failed 00:25:02.682 Write completed with error (sct=0, sc=8) 00:25:02.682 starting I/O failed 00:25:02.682 Write completed with error (sct=0, sc=8) 00:25:02.682 starting I/O failed 00:25:02.682 Write completed with error (sct=0, sc=8) 00:25:02.682 starting I/O failed 00:25:02.682 Read completed with error (sct=0, sc=8) 00:25:02.682 starting I/O failed 00:25:02.682 Write completed with error (sct=0, sc=8) 00:25:02.682 starting I/O failed 00:25:02.682 Read completed with error (sct=0, sc=8) 00:25:02.682 starting I/O failed 00:25:02.683 Write completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Read completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Write completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Read completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Read completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Write completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Read completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 [2024-07-16 01:18:18.467730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:02.683 Read completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Read completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Read completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Read completed with error (sct=0, sc=8) 00:25:02.683 
starting I/O failed 00:25:02.683 Read completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Read completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Read completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Read completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Read completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Read completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Write completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Read completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Write completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Write completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Read completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Read completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Write completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Write completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Write completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Read completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Write completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Write completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Read completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Read completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Write completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Write completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Write completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Read completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Write completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Read completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Write completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Write completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 [2024-07-16 01:18:18.468129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.683 Read completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Read completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Read completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Read completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Read completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Read completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Read completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Read completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Read completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Read completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Read completed with error (sct=0, sc=8) 00:25:02.683 starting I/O 
failed 00:25:02.683 Read completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Read completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Write completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Write completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Write completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Read completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Write completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Read completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Read completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Read completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Write completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Read completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Write completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Read completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Write completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Write completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Write completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Read completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Write completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Write completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Write completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 [2024-07-16 01:18:18.468567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.683 Read completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Read completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Read completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Write completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Read completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Read completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Write completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Read completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Write completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Write completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Read completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Read completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Read completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Write completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Write completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Write completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Read completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 00:25:02.683 Read completed with error (sct=0, sc=8) 00:25:02.683 starting I/O failed 
00:25:02.684 Write completed with error (sct=0, sc=8) 00:25:02.684 starting I/O failed 00:25:02.684 Read completed with error (sct=0, sc=8) 00:25:02.684 starting I/O failed 00:25:02.684 Write completed with error (sct=0, sc=8) 00:25:02.684 starting I/O failed 00:25:02.684 Read completed with error (sct=0, sc=8) 00:25:02.684 starting I/O failed 00:25:02.684 Write completed with error (sct=0, sc=8) 00:25:02.684 starting I/O failed 00:25:02.684 Read completed with error (sct=0, sc=8) 00:25:02.684 starting I/O failed 00:25:02.684 Read completed with error (sct=0, sc=8) 00:25:02.684 starting I/O failed 00:25:02.684 Write completed with error (sct=0, sc=8) 00:25:02.684 starting I/O failed 00:25:02.684 Write completed with error (sct=0, sc=8) 00:25:02.684 starting I/O failed 00:25:02.684 Read completed with error (sct=0, sc=8) 00:25:02.684 starting I/O failed 00:25:02.684 Read completed with error (sct=0, sc=8) 00:25:02.684 starting I/O failed 00:25:02.684 Read completed with error (sct=0, sc=8) 00:25:02.684 starting I/O failed 00:25:02.684 Read completed with error (sct=0, sc=8) 00:25:02.684 starting I/O failed 00:25:02.684 Write completed with error (sct=0, sc=8) 00:25:02.684 starting I/O failed 00:25:02.684 [2024-07-16 01:18:18.468859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:02.684 [2024-07-16 01:18:18.469043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.684 [2024-07-16 01:18:18.469077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.684 qpair failed and we were unable to recover it. 00:25:02.684 [2024-07-16 01:18:18.469193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.684 [2024-07-16 01:18:18.469220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.684 qpair failed and we were unable to recover it. 00:25:02.684 [2024-07-16 01:18:18.469361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.684 [2024-07-16 01:18:18.469387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.684 qpair failed and we were unable to recover it. 00:25:02.684 [2024-07-16 01:18:18.469517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.684 [2024-07-16 01:18:18.469544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.684 qpair failed and we were unable to recover it. 00:25:02.684 [2024-07-16 01:18:18.469678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.684 [2024-07-16 01:18:18.469704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.684 qpair failed and we were unable to recover it. 00:25:02.684 [2024-07-16 01:18:18.469808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.684 [2024-07-16 01:18:18.469834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.684 qpair failed and we were unable to recover it. 
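
By this point tc2 has set up and then deliberately broken the connection: the xtrace above brings up nvmf_tgt (pid 61372, core mask 0xF0, i.e. cores 4-7) inside the cvl_0_0_ns_spdk namespace, provisions Malloc0 behind nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420, starts the reconnect example (pid 61447) against it, and kill -9s the target two seconds into the run. Each of the example's four I/O qpairs (-c 0xF, -q 32) then drains its outstanding reads and writes with "CQ transport error -6" on qpair ids 4, 2, 3 and 1, and the host drops into the errno-111 reconnect loop that fills the records around here. A hedged condensation of that sequence (rpc.py stands in for the suite's rpc_cmd wrapper; the backgrounding and working directory are assumed, everything else is copied from this log):

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &   # target, pid 61372
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &                # host, pid 61447
  sleep 2
  kill -9 61372    # yank the target away mid-I/O; the host must cope
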
00:25:02.684 [2024-07-16 01:18:18.469996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.684 [2024-07-16 01:18:18.470023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.684 qpair failed and we were unable to recover it. 00:25:02.684 [2024-07-16 01:18:18.470119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.684 [2024-07-16 01:18:18.470146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.684 qpair failed and we were unable to recover it. 00:25:02.684 [2024-07-16 01:18:18.470284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.684 [2024-07-16 01:18:18.470310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.684 qpair failed and we were unable to recover it. 00:25:02.684 [2024-07-16 01:18:18.470439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.684 [2024-07-16 01:18:18.470466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.684 qpair failed and we were unable to recover it. 00:25:02.684 [2024-07-16 01:18:18.470615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.684 [2024-07-16 01:18:18.470641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.684 qpair failed and we were unable to recover it. 00:25:02.684 [2024-07-16 01:18:18.470772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.684 [2024-07-16 01:18:18.470799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.684 qpair failed and we were unable to recover it. 00:25:02.684 [2024-07-16 01:18:18.470950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.684 [2024-07-16 01:18:18.471000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.684 qpair failed and we were unable to recover it. 00:25:02.684 [2024-07-16 01:18:18.471112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.684 [2024-07-16 01:18:18.471140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.684 qpair failed and we were unable to recover it. 00:25:02.684 [2024-07-16 01:18:18.471248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.684 [2024-07-16 01:18:18.471275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.684 qpair failed and we were unable to recover it. 00:25:02.684 [2024-07-16 01:18:18.471407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.684 [2024-07-16 01:18:18.471434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.684 qpair failed and we were unable to recover it. 
00:25:02.684 [2024-07-16 01:18:18.471541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.684 [2024-07-16 01:18:18.471566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.684 qpair failed and we were unable to recover it. 00:25:02.684 [2024-07-16 01:18:18.471666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.684 [2024-07-16 01:18:18.471693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.684 qpair failed and we were unable to recover it. 00:25:02.684 [2024-07-16 01:18:18.471793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.684 [2024-07-16 01:18:18.471818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.684 qpair failed and we were unable to recover it. 00:25:02.684 [2024-07-16 01:18:18.471944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.684 [2024-07-16 01:18:18.471977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.684 qpair failed and we were unable to recover it. 00:25:02.684 [2024-07-16 01:18:18.472126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.684 [2024-07-16 01:18:18.472152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.684 qpair failed and we were unable to recover it. 00:25:02.684 [2024-07-16 01:18:18.472260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.684 [2024-07-16 01:18:18.472285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.684 qpair failed and we were unable to recover it. 00:25:02.684 [2024-07-16 01:18:18.472398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.684 [2024-07-16 01:18:18.472425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.684 qpair failed and we were unable to recover it. 00:25:02.684 [2024-07-16 01:18:18.472949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.684 [2024-07-16 01:18:18.472981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.684 qpair failed and we were unable to recover it. 00:25:02.684 [2024-07-16 01:18:18.473110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.684 [2024-07-16 01:18:18.473135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.685 qpair failed and we were unable to recover it. 00:25:02.685 [2024-07-16 01:18:18.473246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.685 [2024-07-16 01:18:18.473272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.685 qpair failed and we were unable to recover it. 
00:25:02.685 [2024-07-16 01:18:18.473437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.685 [2024-07-16 01:18:18.473462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.685 qpair failed and we were unable to recover it. 00:25:02.685 [2024-07-16 01:18:18.473592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.685 [2024-07-16 01:18:18.473617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.685 qpair failed and we were unable to recover it. 00:25:02.685 [2024-07-16 01:18:18.473708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.685 [2024-07-16 01:18:18.473734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.685 qpair failed and we were unable to recover it. 00:25:02.685 [2024-07-16 01:18:18.473859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.685 [2024-07-16 01:18:18.473884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.685 qpair failed and we were unable to recover it. 00:25:02.685 [2024-07-16 01:18:18.474023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.685 [2024-07-16 01:18:18.474049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.685 qpair failed and we were unable to recover it. 00:25:02.685 [2024-07-16 01:18:18.474152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.685 [2024-07-16 01:18:18.474177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.685 qpair failed and we were unable to recover it. 00:25:02.685 [2024-07-16 01:18:18.474330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.685 [2024-07-16 01:18:18.474356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.685 qpair failed and we were unable to recover it. 00:25:02.685 [2024-07-16 01:18:18.474505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.685 [2024-07-16 01:18:18.474529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.685 qpair failed and we were unable to recover it. 00:25:02.685 [2024-07-16 01:18:18.474653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.685 [2024-07-16 01:18:18.474680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.685 qpair failed and we were unable to recover it. 00:25:02.685 [2024-07-16 01:18:18.474780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.685 [2024-07-16 01:18:18.474805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.685 qpair failed and we were unable to recover it. 
00:25:02.685 [2024-07-16 01:18:18.474950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.685 [2024-07-16 01:18:18.474979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.685 qpair failed and we were unable to recover it. 00:25:02.685 [2024-07-16 01:18:18.475083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.685 [2024-07-16 01:18:18.475108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.685 qpair failed and we were unable to recover it. 00:25:02.685 [2024-07-16 01:18:18.475232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.685 [2024-07-16 01:18:18.475257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.685 qpair failed and we were unable to recover it. 00:25:02.685 [2024-07-16 01:18:18.475353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.685 [2024-07-16 01:18:18.475377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.685 qpair failed and we were unable to recover it. 00:25:02.685 [2024-07-16 01:18:18.475477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.685 [2024-07-16 01:18:18.475503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.685 qpair failed and we were unable to recover it. 00:25:02.685 [2024-07-16 01:18:18.475601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.685 [2024-07-16 01:18:18.475627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.685 qpair failed and we were unable to recover it. 00:25:02.685 [2024-07-16 01:18:18.475745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.685 [2024-07-16 01:18:18.475784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.685 qpair failed and we were unable to recover it. 00:25:02.685 [2024-07-16 01:18:18.475951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.685 [2024-07-16 01:18:18.476000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:02.685 qpair failed and we were unable to recover it. 00:25:02.685 [2024-07-16 01:18:18.476110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.685 [2024-07-16 01:18:18.476138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:02.685 qpair failed and we were unable to recover it. 00:25:02.685 [2024-07-16 01:18:18.476261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.685 [2024-07-16 01:18:18.476287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:02.685 qpair failed and we were unable to recover it. 
00:25:02.685 [2024-07-16 01:18:18.476414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.685 [2024-07-16 01:18:18.476440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:02.685 qpair failed and we were unable to recover it. 00:25:02.685 [2024-07-16 01:18:18.476546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.685 [2024-07-16 01:18:18.476572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:02.685 qpair failed and we were unable to recover it. 00:25:02.685 [2024-07-16 01:18:18.476688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.685 [2024-07-16 01:18:18.476714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:02.685 qpair failed and we were unable to recover it. 00:25:02.685 [2024-07-16 01:18:18.476844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.685 [2024-07-16 01:18:18.476874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.685 qpair failed and we were unable to recover it. 00:25:02.685 [2024-07-16 01:18:18.476994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.685 [2024-07-16 01:18:18.477023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.685 qpair failed and we were unable to recover it. 00:25:02.685 [2024-07-16 01:18:18.477125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.685 [2024-07-16 01:18:18.477153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.685 qpair failed and we were unable to recover it. 00:25:02.685 [2024-07-16 01:18:18.477301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.685 [2024-07-16 01:18:18.477336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.685 qpair failed and we were unable to recover it. 00:25:02.685 [2024-07-16 01:18:18.477461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.685 [2024-07-16 01:18:18.477488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.685 qpair failed and we were unable to recover it. 00:25:02.686 [2024-07-16 01:18:18.477625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.686 [2024-07-16 01:18:18.477651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.686 qpair failed and we were unable to recover it. 00:25:02.686 [2024-07-16 01:18:18.477797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.686 [2024-07-16 01:18:18.477823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.686 qpair failed and we were unable to recover it. 
00:25:02.686 [2024-07-16 01:18:18.477927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.686 [2024-07-16 01:18:18.477962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.686 qpair failed and we were unable to recover it. 00:25:02.686 [2024-07-16 01:18:18.478072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.686 [2024-07-16 01:18:18.478099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.686 qpair failed and we were unable to recover it. 00:25:02.686 [2024-07-16 01:18:18.478197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.686 [2024-07-16 01:18:18.478224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.686 qpair failed and we were unable to recover it. 00:25:02.686 [2024-07-16 01:18:18.478355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.686 [2024-07-16 01:18:18.478381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.686 qpair failed and we were unable to recover it. 00:25:02.686 [2024-07-16 01:18:18.478536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.686 [2024-07-16 01:18:18.478562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.686 qpair failed and we were unable to recover it. 00:25:02.686 [2024-07-16 01:18:18.478686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.686 [2024-07-16 01:18:18.478713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.686 qpair failed and we were unable to recover it. 00:25:02.686 [2024-07-16 01:18:18.478868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.686 [2024-07-16 01:18:18.478894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.686 qpair failed and we were unable to recover it. 00:25:02.686 [2024-07-16 01:18:18.479013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.686 [2024-07-16 01:18:18.479042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.686 qpair failed and we were unable to recover it. 00:25:02.686 [2024-07-16 01:18:18.479172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.686 [2024-07-16 01:18:18.479197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.686 qpair failed and we were unable to recover it. 00:25:02.686 [2024-07-16 01:18:18.479328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.686 [2024-07-16 01:18:18.479354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.686 qpair failed and we were unable to recover it. 
00:25:02.686 [2024-07-16 01:18:18.479485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.686 [2024-07-16 01:18:18.479511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.686 qpair failed and we were unable to recover it. 00:25:02.686 [2024-07-16 01:18:18.479610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.686 [2024-07-16 01:18:18.479635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.686 qpair failed and we were unable to recover it. 00:25:02.686 [2024-07-16 01:18:18.479790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.686 [2024-07-16 01:18:18.479816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.686 qpair failed and we were unable to recover it. 00:25:02.686 [2024-07-16 01:18:18.479911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.686 [2024-07-16 01:18:18.479936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.686 qpair failed and we were unable to recover it. 00:25:02.686 [2024-07-16 01:18:18.480063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.686 [2024-07-16 01:18:18.480089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.686 qpair failed and we were unable to recover it. 00:25:02.686 [2024-07-16 01:18:18.480185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.686 [2024-07-16 01:18:18.480210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.686 qpair failed and we were unable to recover it. 00:25:02.686 [2024-07-16 01:18:18.480311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.686 [2024-07-16 01:18:18.480336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.686 qpair failed and we were unable to recover it. 00:25:02.686 [2024-07-16 01:18:18.480429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.686 [2024-07-16 01:18:18.480454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.686 qpair failed and we were unable to recover it. 00:25:02.686 [2024-07-16 01:18:18.480603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.686 [2024-07-16 01:18:18.480629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.686 qpair failed and we were unable to recover it. 00:25:02.686 [2024-07-16 01:18:18.480728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.686 [2024-07-16 01:18:18.480756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.686 qpair failed and we were unable to recover it. 
00:25:02.686 [2024-07-16 01:18:18.480910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.686 [2024-07-16 01:18:18.480936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.686 qpair failed and we were unable to recover it. 00:25:02.686 [2024-07-16 01:18:18.481050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.686 [2024-07-16 01:18:18.481077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.686 qpair failed and we were unable to recover it. 00:25:02.686 [2024-07-16 01:18:18.481202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.686 [2024-07-16 01:18:18.481229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.686 qpair failed and we were unable to recover it. 00:25:02.686 [2024-07-16 01:18:18.481352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.686 [2024-07-16 01:18:18.481379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.686 qpair failed and we were unable to recover it. 00:25:02.686 [2024-07-16 01:18:18.481503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.686 [2024-07-16 01:18:18.481529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.686 qpair failed and we were unable to recover it. 00:25:02.686 [2024-07-16 01:18:18.481660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.686 [2024-07-16 01:18:18.481687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.686 qpair failed and we were unable to recover it. 00:25:02.686 [2024-07-16 01:18:18.481826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.686 [2024-07-16 01:18:18.481865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:02.686 qpair failed and we were unable to recover it. 00:25:02.686 [2024-07-16 01:18:18.481999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.686 [2024-07-16 01:18:18.482029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:02.686 qpair failed and we were unable to recover it. 00:25:02.686 [2024-07-16 01:18:18.482130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.687 [2024-07-16 01:18:18.482156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:02.687 qpair failed and we were unable to recover it. 00:25:02.687 [2024-07-16 01:18:18.482251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.687 [2024-07-16 01:18:18.482278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:02.687 qpair failed and we were unable to recover it. 
00:25:02.687 [2024-07-16 01:18:18.482433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.687 [2024-07-16 01:18:18.482459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:02.687 qpair failed and we were unable to recover it. 00:25:02.687 [2024-07-16 01:18:18.482580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.687 [2024-07-16 01:18:18.482606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:02.687 qpair failed and we were unable to recover it. 00:25:02.687 [2024-07-16 01:18:18.482704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.687 [2024-07-16 01:18:18.482730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.687 qpair failed and we were unable to recover it. 00:25:02.687 [2024-07-16 01:18:18.482853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.687 [2024-07-16 01:18:18.482878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.687 qpair failed and we were unable to recover it. 00:25:02.687 [2024-07-16 01:18:18.482981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.687 [2024-07-16 01:18:18.483010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.687 qpair failed and we were unable to recover it. 00:25:02.687 [2024-07-16 01:18:18.483106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.687 [2024-07-16 01:18:18.483132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.687 qpair failed and we were unable to recover it. 00:25:02.687 [2024-07-16 01:18:18.483231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.687 [2024-07-16 01:18:18.483264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.687 qpair failed and we were unable to recover it. 00:25:02.687 [2024-07-16 01:18:18.483389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.687 [2024-07-16 01:18:18.483415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.687 qpair failed and we were unable to recover it. 00:25:02.687 [2024-07-16 01:18:18.483509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.687 [2024-07-16 01:18:18.483535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.687 qpair failed and we were unable to recover it. 00:25:02.687 [2024-07-16 01:18:18.483629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.687 [2024-07-16 01:18:18.483657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.687 qpair failed and we were unable to recover it. 
00:25:02.687 [2024-07-16 01:18:18.483777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.687 [2024-07-16 01:18:18.483803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:02.687 qpair failed and we were unable to recover it.
00:25:02.687 [2024-07-16 01:18:18.486128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.687 [2024-07-16 01:18:18.486156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:02.687 qpair failed and we were unable to recover it.
00:25:02.689 [2024-07-16 01:18:18.492817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.689 [2024-07-16 01:18:18.492856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:02.689 qpair failed and we were unable to recover it.
[... the same three-line pattern repeats without interruption through [2024-07-16 01:18:18.515452], cycling among tqpairs 0x7f7a58000b90, 0x7f7a60000b90, and 0x7f7a68000b90; every attempt targets addr=10.0.0.2, port=4420 and each one ends with "qpair failed and we were unable to recover it." ...]
00:25:02.694 [2024-07-16 01:18:18.515582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.694 [2024-07-16 01:18:18.515608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.694 qpair failed and we were unable to recover it. 00:25:02.694 [2024-07-16 01:18:18.515755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.694 [2024-07-16 01:18:18.515781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.694 qpair failed and we were unable to recover it. 00:25:02.694 [2024-07-16 01:18:18.515901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.694 [2024-07-16 01:18:18.515927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.694 qpair failed and we were unable to recover it. 00:25:02.694 [2024-07-16 01:18:18.516066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.694 [2024-07-16 01:18:18.516091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.694 qpair failed and we were unable to recover it. 00:25:02.694 [2024-07-16 01:18:18.516185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.694 [2024-07-16 01:18:18.516211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.694 qpair failed and we were unable to recover it. 00:25:02.694 [2024-07-16 01:18:18.516329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.694 [2024-07-16 01:18:18.516354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.694 qpair failed and we were unable to recover it. 00:25:02.694 [2024-07-16 01:18:18.516477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.694 [2024-07-16 01:18:18.516501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.694 qpair failed and we were unable to recover it. 00:25:02.694 [2024-07-16 01:18:18.516628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.694 [2024-07-16 01:18:18.516653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.694 qpair failed and we were unable to recover it. 00:25:02.694 [2024-07-16 01:18:18.516779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.695 [2024-07-16 01:18:18.516804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.695 qpair failed and we were unable to recover it. 00:25:02.695 [2024-07-16 01:18:18.516954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.695 [2024-07-16 01:18:18.516985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.695 qpair failed and we were unable to recover it. 
00:25:02.695 [2024-07-16 01:18:18.517112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.695 [2024-07-16 01:18:18.517136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.695 qpair failed and we were unable to recover it. 00:25:02.695 [2024-07-16 01:18:18.517234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.695 [2024-07-16 01:18:18.517259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.695 qpair failed and we were unable to recover it. 00:25:02.695 [2024-07-16 01:18:18.517408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.695 [2024-07-16 01:18:18.517434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.695 qpair failed and we were unable to recover it. 00:25:02.695 [2024-07-16 01:18:18.517540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.695 [2024-07-16 01:18:18.517570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.695 qpair failed and we were unable to recover it. 00:25:02.695 [2024-07-16 01:18:18.517669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.695 [2024-07-16 01:18:18.517694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.695 qpair failed and we were unable to recover it. 00:25:02.695 [2024-07-16 01:18:18.517817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.695 [2024-07-16 01:18:18.517843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.695 qpair failed and we were unable to recover it. 00:25:02.695 [2024-07-16 01:18:18.517936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.695 [2024-07-16 01:18:18.517968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.695 qpair failed and we were unable to recover it. 00:25:02.695 [2024-07-16 01:18:18.518065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.695 [2024-07-16 01:18:18.518090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.695 qpair failed and we were unable to recover it. 00:25:02.695 [2024-07-16 01:18:18.518237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.695 [2024-07-16 01:18:18.518263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.695 qpair failed and we were unable to recover it. 00:25:02.695 [2024-07-16 01:18:18.518386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.695 [2024-07-16 01:18:18.518411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.695 qpair failed and we were unable to recover it. 
00:25:02.695 [2024-07-16 01:18:18.518512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.695 [2024-07-16 01:18:18.518536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.695 qpair failed and we were unable to recover it. 00:25:02.695 [2024-07-16 01:18:18.518666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.695 [2024-07-16 01:18:18.518691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.695 qpair failed and we were unable to recover it. 00:25:02.695 [2024-07-16 01:18:18.518819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.695 [2024-07-16 01:18:18.518844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.695 qpair failed and we were unable to recover it. 00:25:02.695 [2024-07-16 01:18:18.518934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.695 [2024-07-16 01:18:18.518967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.695 qpair failed and we were unable to recover it. 00:25:02.695 [2024-07-16 01:18:18.519096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.695 [2024-07-16 01:18:18.519121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.695 qpair failed and we were unable to recover it. 00:25:02.695 [2024-07-16 01:18:18.519245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.695 [2024-07-16 01:18:18.519271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.695 qpair failed and we were unable to recover it. 00:25:02.695 [2024-07-16 01:18:18.519394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.695 [2024-07-16 01:18:18.519418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.695 qpair failed and we were unable to recover it. 00:25:02.695 [2024-07-16 01:18:18.519572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.695 [2024-07-16 01:18:18.519597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.695 qpair failed and we were unable to recover it. 00:25:02.695 [2024-07-16 01:18:18.519721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.695 [2024-07-16 01:18:18.519746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.695 qpair failed and we were unable to recover it. 00:25:02.695 [2024-07-16 01:18:18.519869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.695 [2024-07-16 01:18:18.519894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.695 qpair failed and we were unable to recover it. 
00:25:02.695 [2024-07-16 01:18:18.519995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.695 [2024-07-16 01:18:18.520021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.695 qpair failed and we were unable to recover it. 00:25:02.695 [2024-07-16 01:18:18.520147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.695 [2024-07-16 01:18:18.520172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.695 qpair failed and we were unable to recover it. 00:25:02.695 [2024-07-16 01:18:18.520271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.695 [2024-07-16 01:18:18.520297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.695 qpair failed and we were unable to recover it. 00:25:02.695 [2024-07-16 01:18:18.520429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.695 [2024-07-16 01:18:18.520454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.695 qpair failed and we were unable to recover it. 00:25:02.695 [2024-07-16 01:18:18.520582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.695 [2024-07-16 01:18:18.520607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.695 qpair failed and we were unable to recover it. 00:25:02.695 [2024-07-16 01:18:18.520757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.695 [2024-07-16 01:18:18.520782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.695 qpair failed and we were unable to recover it. 00:25:02.695 [2024-07-16 01:18:18.520928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.695 [2024-07-16 01:18:18.520953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.695 qpair failed and we were unable to recover it. 00:25:02.695 [2024-07-16 01:18:18.521082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.695 [2024-07-16 01:18:18.521107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.695 qpair failed and we were unable to recover it. 00:25:02.695 [2024-07-16 01:18:18.521229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.695 [2024-07-16 01:18:18.521254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.695 qpair failed and we were unable to recover it. 00:25:02.695 [2024-07-16 01:18:18.521409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-16 01:18:18.521456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 
00:25:02.696 [2024-07-16 01:18:18.521590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-16 01:18:18.521616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 00:25:02.696 [2024-07-16 01:18:18.521739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-16 01:18:18.521764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 00:25:02.696 [2024-07-16 01:18:18.521859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-16 01:18:18.521885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 00:25:02.696 [2024-07-16 01:18:18.522011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-16 01:18:18.522037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 00:25:02.696 [2024-07-16 01:18:18.522186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-16 01:18:18.522211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 00:25:02.696 [2024-07-16 01:18:18.522386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-16 01:18:18.522434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 00:25:02.696 [2024-07-16 01:18:18.522564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-16 01:18:18.522589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 00:25:02.696 [2024-07-16 01:18:18.522712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-16 01:18:18.522737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 00:25:02.696 [2024-07-16 01:18:18.522887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-16 01:18:18.522912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 00:25:02.696 [2024-07-16 01:18:18.523017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-16 01:18:18.523042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 
00:25:02.696 [2024-07-16 01:18:18.523165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-16 01:18:18.523191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 00:25:02.696 [2024-07-16 01:18:18.523311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-16 01:18:18.523359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 00:25:02.696 [2024-07-16 01:18:18.523486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-16 01:18:18.523511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 00:25:02.696 [2024-07-16 01:18:18.523638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-16 01:18:18.523667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 00:25:02.696 [2024-07-16 01:18:18.523775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-16 01:18:18.523800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 00:25:02.696 [2024-07-16 01:18:18.523920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-16 01:18:18.523945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 00:25:02.696 [2024-07-16 01:18:18.524054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-16 01:18:18.524079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 00:25:02.696 [2024-07-16 01:18:18.524203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-16 01:18:18.524229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 00:25:02.696 [2024-07-16 01:18:18.524409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-16 01:18:18.524469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 00:25:02.696 [2024-07-16 01:18:18.524564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-16 01:18:18.524588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 
00:25:02.696 [2024-07-16 01:18:18.524717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-16 01:18:18.524742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 00:25:02.696 [2024-07-16 01:18:18.524867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-16 01:18:18.524892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 00:25:02.696 [2024-07-16 01:18:18.525037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-16 01:18:18.525063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 00:25:02.696 [2024-07-16 01:18:18.525163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-16 01:18:18.525188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 00:25:02.696 [2024-07-16 01:18:18.525344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-16 01:18:18.525390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 00:25:02.696 [2024-07-16 01:18:18.525480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-16 01:18:18.525505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 00:25:02.696 [2024-07-16 01:18:18.525629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-16 01:18:18.525653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 00:25:02.696 [2024-07-16 01:18:18.525761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-16 01:18:18.525787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 00:25:02.697 [2024-07-16 01:18:18.525913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.697 [2024-07-16 01:18:18.525937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.697 qpair failed and we were unable to recover it. 00:25:02.697 [2024-07-16 01:18:18.526062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.697 [2024-07-16 01:18:18.526087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.697 qpair failed and we were unable to recover it. 
00:25:02.697 [2024-07-16 01:18:18.526205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.697 [2024-07-16 01:18:18.526230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.697 qpair failed and we were unable to recover it. 00:25:02.697 [2024-07-16 01:18:18.526364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.697 [2024-07-16 01:18:18.526389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.697 qpair failed and we were unable to recover it. 00:25:02.697 [2024-07-16 01:18:18.526513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.697 [2024-07-16 01:18:18.526539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.697 qpair failed and we were unable to recover it. 00:25:02.697 [2024-07-16 01:18:18.526688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.697 [2024-07-16 01:18:18.526712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.697 qpair failed and we were unable to recover it. 00:25:02.697 [2024-07-16 01:18:18.526836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.697 [2024-07-16 01:18:18.526861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.697 qpair failed and we were unable to recover it. 00:25:02.697 [2024-07-16 01:18:18.526964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.697 [2024-07-16 01:18:18.526989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.697 qpair failed and we were unable to recover it. 00:25:02.697 [2024-07-16 01:18:18.527101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.697 [2024-07-16 01:18:18.527139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:02.697 qpair failed and we were unable to recover it. 00:25:02.697 [2024-07-16 01:18:18.527250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.697 [2024-07-16 01:18:18.527277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:02.697 qpair failed and we were unable to recover it. 00:25:02.697 [2024-07-16 01:18:18.527399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.697 [2024-07-16 01:18:18.527425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:02.697 qpair failed and we were unable to recover it. 00:25:02.697 [2024-07-16 01:18:18.527518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.697 [2024-07-16 01:18:18.527544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:02.697 qpair failed and we were unable to recover it. 
00:25:02.697 [2024-07-16 01:18:18.527668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.697 [2024-07-16 01:18:18.527694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:02.697 qpair failed and we were unable to recover it. 00:25:02.697 [2024-07-16 01:18:18.527818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.697 [2024-07-16 01:18:18.527845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:02.697 qpair failed and we were unable to recover it. 00:25:02.697 [2024-07-16 01:18:18.527971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.697 [2024-07-16 01:18:18.527998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:02.697 qpair failed and we were unable to recover it. 00:25:02.697 [2024-07-16 01:18:18.528121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.697 [2024-07-16 01:18:18.528147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:02.697 qpair failed and we were unable to recover it. 00:25:02.697 [2024-07-16 01:18:18.528272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.697 [2024-07-16 01:18:18.528298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:02.697 qpair failed and we were unable to recover it. 00:25:02.697 [2024-07-16 01:18:18.528424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.697 [2024-07-16 01:18:18.528450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:02.697 qpair failed and we were unable to recover it. 00:25:02.697 [2024-07-16 01:18:18.528603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.697 [2024-07-16 01:18:18.528629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:02.697 qpair failed and we were unable to recover it. 00:25:02.697 [2024-07-16 01:18:18.528715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.697 [2024-07-16 01:18:18.528741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:02.697 qpair failed and we were unable to recover it. 00:25:02.697 [2024-07-16 01:18:18.528849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.697 [2024-07-16 01:18:18.528876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.697 qpair failed and we were unable to recover it. 00:25:02.697 [2024-07-16 01:18:18.529032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.697 [2024-07-16 01:18:18.529085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.697 qpair failed and we were unable to recover it. 
00:25:02.697 [2024-07-16 01:18:18.529254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.697 [2024-07-16 01:18:18.529304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.697 qpair failed and we were unable to recover it. 00:25:02.697 [2024-07-16 01:18:18.529403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.697 [2024-07-16 01:18:18.529428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.697 qpair failed and we were unable to recover it. 00:25:02.697 [2024-07-16 01:18:18.529633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.697 [2024-07-16 01:18:18.529658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.697 qpair failed and we were unable to recover it. 00:25:02.697 [2024-07-16 01:18:18.529785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.697 [2024-07-16 01:18:18.529816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.697 qpair failed and we were unable to recover it. 00:25:02.697 [2024-07-16 01:18:18.529934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.697 [2024-07-16 01:18:18.529965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.697 qpair failed and we were unable to recover it. 00:25:02.697 [2024-07-16 01:18:18.530062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.697 [2024-07-16 01:18:18.530087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.697 qpair failed and we were unable to recover it. 00:25:02.697 [2024-07-16 01:18:18.530213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.697 [2024-07-16 01:18:18.530238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.697 qpair failed and we were unable to recover it. 00:25:02.697 [2024-07-16 01:18:18.530396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.697 [2024-07-16 01:18:18.530421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.697 qpair failed and we were unable to recover it. 00:25:02.697 [2024-07-16 01:18:18.530567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.697 [2024-07-16 01:18:18.530593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.698 qpair failed and we were unable to recover it. 00:25:02.698 [2024-07-16 01:18:18.530701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.698 [2024-07-16 01:18:18.530726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.698 qpair failed and we were unable to recover it. 
00:25:02.698 [2024-07-16 01:18:18.530878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.698 [2024-07-16 01:18:18.530903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.698 qpair failed and we were unable to recover it. 00:25:02.698 [2024-07-16 01:18:18.531067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.698 [2024-07-16 01:18:18.531093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.698 qpair failed and we were unable to recover it. 00:25:02.698 [2024-07-16 01:18:18.531189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.698 [2024-07-16 01:18:18.531215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.698 qpair failed and we were unable to recover it. 00:25:02.698 [2024-07-16 01:18:18.531316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.698 [2024-07-16 01:18:18.531340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.698 qpair failed and we were unable to recover it. 00:25:02.698 [2024-07-16 01:18:18.531458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.698 [2024-07-16 01:18:18.531484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.698 qpair failed and we were unable to recover it. 00:25:02.698 [2024-07-16 01:18:18.531607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.698 [2024-07-16 01:18:18.531632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.698 qpair failed and we were unable to recover it. 00:25:02.698 [2024-07-16 01:18:18.531737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.698 [2024-07-16 01:18:18.531763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.698 qpair failed and we were unable to recover it. 00:25:02.698 [2024-07-16 01:18:18.531913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.698 [2024-07-16 01:18:18.531939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.698 qpair failed and we were unable to recover it. 00:25:02.698 [2024-07-16 01:18:18.532102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.698 [2024-07-16 01:18:18.532127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.698 qpair failed and we were unable to recover it. 00:25:02.698 [2024-07-16 01:18:18.532248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.698 [2024-07-16 01:18:18.532274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.698 qpair failed and we were unable to recover it. 
00:25:02.698 [2024-07-16 01:18:18.532417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.698 [2024-07-16 01:18:18.532442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.698 qpair failed and we were unable to recover it. 00:25:02.698 [2024-07-16 01:18:18.532540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.698 [2024-07-16 01:18:18.532565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.698 qpair failed and we were unable to recover it. 00:25:02.698 [2024-07-16 01:18:18.532683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.698 [2024-07-16 01:18:18.532708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.698 qpair failed and we were unable to recover it. 00:25:02.698 [2024-07-16 01:18:18.532839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.698 [2024-07-16 01:18:18.532864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.698 qpair failed and we were unable to recover it. 00:25:02.698 [2024-07-16 01:18:18.532987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.698 [2024-07-16 01:18:18.533014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.698 qpair failed and we were unable to recover it. 00:25:02.698 [2024-07-16 01:18:18.533117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.698 [2024-07-16 01:18:18.533143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.698 qpair failed and we were unable to recover it. 00:25:02.698 [2024-07-16 01:18:18.533243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.698 [2024-07-16 01:18:18.533269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.698 qpair failed and we were unable to recover it. 00:25:02.698 [2024-07-16 01:18:18.533358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.698 [2024-07-16 01:18:18.533383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.698 qpair failed and we were unable to recover it. 00:25:02.698 [2024-07-16 01:18:18.533503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.698 [2024-07-16 01:18:18.533530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.698 qpair failed and we were unable to recover it. 00:25:02.698 [2024-07-16 01:18:18.533632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.698 [2024-07-16 01:18:18.533657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.698 qpair failed and we were unable to recover it. 
00:25:02.698 [2024-07-16 01:18:18.533750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.698 [2024-07-16 01:18:18.533776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.698 qpair failed and we were unable to recover it. 00:25:02.698 [2024-07-16 01:18:18.533920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.698 [2024-07-16 01:18:18.533944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.698 qpair failed and we were unable to recover it. 00:25:02.698 [2024-07-16 01:18:18.534052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.698 [2024-07-16 01:18:18.534078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.698 qpair failed and we were unable to recover it. 00:25:02.698 [2024-07-16 01:18:18.534197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.698 [2024-07-16 01:18:18.534222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.698 qpair failed and we were unable to recover it. 00:25:02.698 [2024-07-16 01:18:18.534354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.698 [2024-07-16 01:18:18.534378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.698 qpair failed and we were unable to recover it. 00:25:02.698 [2024-07-16 01:18:18.534526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.698 [2024-07-16 01:18:18.534552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.698 qpair failed and we were unable to recover it. 00:25:02.698 [2024-07-16 01:18:18.534675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.698 [2024-07-16 01:18:18.534700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.698 qpair failed and we were unable to recover it. 00:25:02.698 [2024-07-16 01:18:18.534818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.698 [2024-07-16 01:18:18.534844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.698 qpair failed and we were unable to recover it. 00:25:02.698 [2024-07-16 01:18:18.534996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.698 [2024-07-16 01:18:18.535022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.699 qpair failed and we were unable to recover it. 00:25:02.699 [2024-07-16 01:18:18.535168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.699 [2024-07-16 01:18:18.535192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.699 qpair failed and we were unable to recover it. 
00:25:02.699 [2024-07-16 01:18:18.535339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.699 [2024-07-16 01:18:18.535365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:02.699 qpair failed and we were unable to recover it.
[... the same posix_sock_create / nvme_tcp_qpair_connect_sock error triple repeats continuously from 01:18:18.535489 through 01:18:18.575396, always connect() failed, errno = 111 against addr=10.0.0.2, port=4420, cycling over tqpair=0x7f7a68000b90, 0x7f7a60000b90, 0x7f7a58000b90, and 0x14b73f0, each ending "qpair failed and we were unable to recover it."; duplicate records elided ...]
00:25:02.706 [2024-07-16 01:18:18.575371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.706 [2024-07-16 01:18:18.575396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.706 qpair failed and we were unable to recover it.
00:25:02.706 [2024-07-16 01:18:18.575492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.706 [2024-07-16 01:18:18.575519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.706 qpair failed and we were unable to recover it. 00:25:02.706 [2024-07-16 01:18:18.575621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.706 [2024-07-16 01:18:18.575648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.706 qpair failed and we were unable to recover it. 00:25:02.706 [2024-07-16 01:18:18.575840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.706 [2024-07-16 01:18:18.575866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.706 qpair failed and we were unable to recover it. 00:25:02.706 [2024-07-16 01:18:18.576001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.706 [2024-07-16 01:18:18.576027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.706 qpair failed and we were unable to recover it. 00:25:02.706 [2024-07-16 01:18:18.576149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.706 [2024-07-16 01:18:18.576176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.706 qpair failed and we were unable to recover it. 00:25:02.706 [2024-07-16 01:18:18.576358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.706 [2024-07-16 01:18:18.576384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.706 qpair failed and we were unable to recover it. 00:25:02.706 [2024-07-16 01:18:18.576510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.706 [2024-07-16 01:18:18.576536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.706 qpair failed and we were unable to recover it. 00:25:02.706 [2024-07-16 01:18:18.576751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.706 [2024-07-16 01:18:18.576797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.706 qpair failed and we were unable to recover it. 00:25:02.706 [2024-07-16 01:18:18.577023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.706 [2024-07-16 01:18:18.577070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.706 qpair failed and we were unable to recover it. 00:25:02.706 [2024-07-16 01:18:18.577239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.706 [2024-07-16 01:18:18.577289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.706 qpair failed and we were unable to recover it. 
00:25:02.706 [2024-07-16 01:18:18.577532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.706 [2024-07-16 01:18:18.577580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.706 qpair failed and we were unable to recover it. 00:25:02.706 [2024-07-16 01:18:18.577823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.706 [2024-07-16 01:18:18.577872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.706 qpair failed and we were unable to recover it. 00:25:02.706 [2024-07-16 01:18:18.578097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.706 [2024-07-16 01:18:18.578123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.706 qpair failed and we were unable to recover it. 00:25:02.706 [2024-07-16 01:18:18.578281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.706 [2024-07-16 01:18:18.578307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.706 qpair failed and we were unable to recover it. 00:25:02.706 [2024-07-16 01:18:18.578506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.706 [2024-07-16 01:18:18.578555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.706 qpair failed and we were unable to recover it. 00:25:02.706 [2024-07-16 01:18:18.578813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.706 [2024-07-16 01:18:18.578875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.707 qpair failed and we were unable to recover it. 00:25:02.707 [2024-07-16 01:18:18.579132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.707 [2024-07-16 01:18:18.579183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.707 qpair failed and we were unable to recover it. 00:25:02.707 [2024-07-16 01:18:18.579372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.707 [2024-07-16 01:18:18.579423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.707 qpair failed and we were unable to recover it. 00:25:02.707 [2024-07-16 01:18:18.579642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.707 [2024-07-16 01:18:18.579690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.707 qpair failed and we were unable to recover it. 00:25:02.707 [2024-07-16 01:18:18.579932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.707 [2024-07-16 01:18:18.579985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.707 qpair failed and we were unable to recover it. 
00:25:02.707 [2024-07-16 01:18:18.580112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.707 [2024-07-16 01:18:18.580142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.707 qpair failed and we were unable to recover it. 00:25:02.707 [2024-07-16 01:18:18.580334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.707 [2024-07-16 01:18:18.580383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.707 qpair failed and we were unable to recover it. 00:25:02.707 [2024-07-16 01:18:18.580572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.707 [2024-07-16 01:18:18.580612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.707 qpair failed and we were unable to recover it. 00:25:02.707 [2024-07-16 01:18:18.580766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.707 [2024-07-16 01:18:18.580792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.707 qpair failed and we were unable to recover it. 00:25:02.707 [2024-07-16 01:18:18.581042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.707 [2024-07-16 01:18:18.581093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.707 qpair failed and we were unable to recover it. 00:25:02.707 [2024-07-16 01:18:18.581332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.707 [2024-07-16 01:18:18.581380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.707 qpair failed and we were unable to recover it. 00:25:02.707 [2024-07-16 01:18:18.581623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.707 [2024-07-16 01:18:18.581672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.707 qpair failed and we were unable to recover it. 00:25:02.707 [2024-07-16 01:18:18.581916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.707 [2024-07-16 01:18:18.581978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.707 qpair failed and we were unable to recover it. 00:25:02.707 [2024-07-16 01:18:18.582212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.707 [2024-07-16 01:18:18.582262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.707 qpair failed and we were unable to recover it. 00:25:02.707 [2024-07-16 01:18:18.582508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.707 [2024-07-16 01:18:18.582534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.707 qpair failed and we were unable to recover it. 
00:25:02.707 [2024-07-16 01:18:18.582659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.707 [2024-07-16 01:18:18.582685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.707 qpair failed and we were unable to recover it. 00:25:02.707 [2024-07-16 01:18:18.582811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.707 [2024-07-16 01:18:18.582838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.707 qpair failed and we were unable to recover it. 00:25:02.707 [2024-07-16 01:18:18.582975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.707 [2024-07-16 01:18:18.583002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.707 qpair failed and we were unable to recover it. 00:25:02.707 [2024-07-16 01:18:18.583151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.707 [2024-07-16 01:18:18.583177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.707 qpair failed and we were unable to recover it. 00:25:02.707 [2024-07-16 01:18:18.583383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.707 [2024-07-16 01:18:18.583410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.707 qpair failed and we were unable to recover it. 00:25:02.707 [2024-07-16 01:18:18.583515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.707 [2024-07-16 01:18:18.583541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.707 qpair failed and we were unable to recover it. 00:25:02.707 [2024-07-16 01:18:18.583638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.707 [2024-07-16 01:18:18.583664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.707 qpair failed and we were unable to recover it. 00:25:02.707 [2024-07-16 01:18:18.583760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.707 [2024-07-16 01:18:18.583787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.707 qpair failed and we were unable to recover it. 00:25:02.707 [2024-07-16 01:18:18.583938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.707 [2024-07-16 01:18:18.583970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.707 qpair failed and we were unable to recover it. 00:25:02.707 [2024-07-16 01:18:18.584094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.707 [2024-07-16 01:18:18.584120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.707 qpair failed and we were unable to recover it. 
00:25:02.707 [2024-07-16 01:18:18.584292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.707 [2024-07-16 01:18:18.584343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.707 qpair failed and we were unable to recover it. 00:25:02.707 [2024-07-16 01:18:18.584549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.707 [2024-07-16 01:18:18.584598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.707 qpair failed and we were unable to recover it. 00:25:02.707 [2024-07-16 01:18:18.584844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.707 [2024-07-16 01:18:18.584893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.707 qpair failed and we were unable to recover it. 00:25:02.707 [2024-07-16 01:18:18.585116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.707 [2024-07-16 01:18:18.585167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.707 qpair failed and we were unable to recover it. 00:25:02.707 [2024-07-16 01:18:18.585426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.707 [2024-07-16 01:18:18.585452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.707 qpair failed and we were unable to recover it. 00:25:02.707 [2024-07-16 01:18:18.585572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.707 [2024-07-16 01:18:18.585597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.708 qpair failed and we were unable to recover it. 00:25:02.708 [2024-07-16 01:18:18.585787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.708 [2024-07-16 01:18:18.585814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.708 qpair failed and we were unable to recover it. 00:25:02.708 [2024-07-16 01:18:18.585965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.708 [2024-07-16 01:18:18.585992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.708 qpair failed and we were unable to recover it. 00:25:02.708 [2024-07-16 01:18:18.586117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.708 [2024-07-16 01:18:18.586143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.708 qpair failed and we were unable to recover it. 00:25:02.708 [2024-07-16 01:18:18.586373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.708 [2024-07-16 01:18:18.586422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.708 qpair failed and we were unable to recover it. 
00:25:02.708 [2024-07-16 01:18:18.586588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.708 [2024-07-16 01:18:18.586638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.708 qpair failed and we were unable to recover it. 00:25:02.708 [2024-07-16 01:18:18.586849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.708 [2024-07-16 01:18:18.586898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.708 qpair failed and we were unable to recover it. 00:25:02.708 [2024-07-16 01:18:18.587125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.708 [2024-07-16 01:18:18.587174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.708 qpair failed and we were unable to recover it. 00:25:02.708 [2024-07-16 01:18:18.587380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.708 [2024-07-16 01:18:18.587429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.708 qpair failed and we were unable to recover it. 00:25:02.708 [2024-07-16 01:18:18.587649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.708 [2024-07-16 01:18:18.587675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.708 qpair failed and we were unable to recover it. 00:25:02.708 [2024-07-16 01:18:18.587820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.708 [2024-07-16 01:18:18.587845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.708 qpair failed and we were unable to recover it. 00:25:02.708 [2024-07-16 01:18:18.588062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.708 [2024-07-16 01:18:18.588112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.708 qpair failed and we were unable to recover it. 00:25:02.708 [2024-07-16 01:18:18.588299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.708 [2024-07-16 01:18:18.588348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.708 qpair failed and we were unable to recover it. 00:25:02.708 [2024-07-16 01:18:18.588548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.708 [2024-07-16 01:18:18.588596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.708 qpair failed and we were unable to recover it. 00:25:02.708 [2024-07-16 01:18:18.588812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.708 [2024-07-16 01:18:18.588861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.708 qpair failed and we were unable to recover it. 
00:25:02.708 [2024-07-16 01:18:18.589098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.708 [2024-07-16 01:18:18.589129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.708 qpair failed and we were unable to recover it. 00:25:02.708 [2024-07-16 01:18:18.589281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.708 [2024-07-16 01:18:18.589307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.708 qpair failed and we were unable to recover it. 00:25:02.708 [2024-07-16 01:18:18.589507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.708 [2024-07-16 01:18:18.589555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.708 qpair failed and we were unable to recover it. 00:25:02.708 [2024-07-16 01:18:18.589748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.708 [2024-07-16 01:18:18.589774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.708 qpair failed and we were unable to recover it. 00:25:02.708 [2024-07-16 01:18:18.589891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.708 [2024-07-16 01:18:18.589917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.708 qpair failed and we were unable to recover it. 00:25:02.708 [2024-07-16 01:18:18.590027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.708 [2024-07-16 01:18:18.590053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.708 qpair failed and we were unable to recover it. 00:25:02.708 [2024-07-16 01:18:18.590173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.708 [2024-07-16 01:18:18.590222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.708 qpair failed and we were unable to recover it. 00:25:02.708 [2024-07-16 01:18:18.590469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.708 [2024-07-16 01:18:18.590519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.708 qpair failed and we were unable to recover it. 00:25:02.708 [2024-07-16 01:18:18.590794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.708 [2024-07-16 01:18:18.590845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.708 qpair failed and we were unable to recover it. 00:25:02.708 [2024-07-16 01:18:18.591061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.708 [2024-07-16 01:18:18.591113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.708 qpair failed and we were unable to recover it. 
00:25:02.708 [2024-07-16 01:18:18.591329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.708 [2024-07-16 01:18:18.591355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.708 qpair failed and we were unable to recover it. 00:25:02.708 [2024-07-16 01:18:18.591479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.708 [2024-07-16 01:18:18.591506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.708 qpair failed and we were unable to recover it. 00:25:02.708 [2024-07-16 01:18:18.591610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.708 [2024-07-16 01:18:18.591636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.708 qpair failed and we were unable to recover it. 00:25:02.708 [2024-07-16 01:18:18.591763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.708 [2024-07-16 01:18:18.591790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.708 qpair failed and we were unable to recover it. 00:25:02.708 [2024-07-16 01:18:18.591963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.708 [2024-07-16 01:18:18.591991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.708 qpair failed and we were unable to recover it. 00:25:02.708 [2024-07-16 01:18:18.592097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.708 [2024-07-16 01:18:18.592123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.708 qpair failed and we were unable to recover it. 00:25:02.708 [2024-07-16 01:18:18.592292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.708 [2024-07-16 01:18:18.592318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.708 qpair failed and we were unable to recover it. 00:25:02.709 [2024-07-16 01:18:18.592417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.709 [2024-07-16 01:18:18.592443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.709 qpair failed and we were unable to recover it. 00:25:02.709 [2024-07-16 01:18:18.592580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.709 [2024-07-16 01:18:18.592606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.709 qpair failed and we were unable to recover it. 00:25:02.709 [2024-07-16 01:18:18.592821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.709 [2024-07-16 01:18:18.592870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.709 qpair failed and we were unable to recover it. 
00:25:02.709 [2024-07-16 01:18:18.593116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.709 [2024-07-16 01:18:18.593165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.709 qpair failed and we were unable to recover it. 00:25:02.709 [2024-07-16 01:18:18.593355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.709 [2024-07-16 01:18:18.593403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.709 qpair failed and we were unable to recover it. 00:25:02.709 [2024-07-16 01:18:18.593641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.709 [2024-07-16 01:18:18.593688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.709 qpair failed and we were unable to recover it. 00:25:02.709 [2024-07-16 01:18:18.593931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.709 [2024-07-16 01:18:18.593990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.709 qpair failed and we were unable to recover it. 00:25:02.709 [2024-07-16 01:18:18.594192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.709 [2024-07-16 01:18:18.594241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.709 qpair failed and we were unable to recover it. 00:25:02.709 [2024-07-16 01:18:18.594424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.709 [2024-07-16 01:18:18.594475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.709 qpair failed and we were unable to recover it. 00:25:02.709 [2024-07-16 01:18:18.594720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.709 [2024-07-16 01:18:18.594769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.709 qpair failed and we were unable to recover it. 00:25:02.709 [2024-07-16 01:18:18.595032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.709 [2024-07-16 01:18:18.595081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.709 qpair failed and we were unable to recover it. 00:25:02.709 [2024-07-16 01:18:18.595329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.709 [2024-07-16 01:18:18.595378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.709 qpair failed and we were unable to recover it. 00:25:02.709 [2024-07-16 01:18:18.595594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.709 [2024-07-16 01:18:18.595643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.709 qpair failed and we were unable to recover it. 
00:25:02.709 [2024-07-16 01:18:18.595859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.709 [2024-07-16 01:18:18.595885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.709 qpair failed and we were unable to recover it. 00:25:02.709 [2024-07-16 01:18:18.596011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.709 [2024-07-16 01:18:18.596038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.709 qpair failed and we were unable to recover it. 00:25:02.709 [2024-07-16 01:18:18.596224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.709 [2024-07-16 01:18:18.596251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.709 qpair failed and we were unable to recover it. 00:25:02.709 [2024-07-16 01:18:18.596411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.709 [2024-07-16 01:18:18.596437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.709 qpair failed and we were unable to recover it. 00:25:02.709 [2024-07-16 01:18:18.596655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.709 [2024-07-16 01:18:18.596706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.709 qpair failed and we were unable to recover it. 00:25:02.709 [2024-07-16 01:18:18.596950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.709 [2024-07-16 01:18:18.597013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.709 qpair failed and we were unable to recover it. 00:25:02.709 [2024-07-16 01:18:18.597225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.709 [2024-07-16 01:18:18.597276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.709 qpair failed and we were unable to recover it. 00:25:02.709 [2024-07-16 01:18:18.597474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.709 [2024-07-16 01:18:18.597525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.709 qpair failed and we were unable to recover it. 00:25:02.709 [2024-07-16 01:18:18.597773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.709 [2024-07-16 01:18:18.597799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.709 qpair failed and we were unable to recover it. 00:25:02.709 [2024-07-16 01:18:18.597927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.709 [2024-07-16 01:18:18.597959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.709 qpair failed and we were unable to recover it. 
00:25:02.709 [2024-07-16 01:18:18.598216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.709 [2024-07-16 01:18:18.598281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.709 qpair failed and we were unable to recover it. 00:25:02.709 [2024-07-16 01:18:18.598508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.709 [2024-07-16 01:18:18.598559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.709 qpair failed and we were unable to recover it. 00:25:02.709 [2024-07-16 01:18:18.598821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.709 [2024-07-16 01:18:18.598879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.709 qpair failed and we were unable to recover it. 00:25:02.709 [2024-07-16 01:18:18.599131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.709 [2024-07-16 01:18:18.599182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.709 qpair failed and we were unable to recover it. 00:25:02.709 [2024-07-16 01:18:18.599393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.709 [2024-07-16 01:18:18.599441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.709 qpair failed and we were unable to recover it. 00:25:02.709 [2024-07-16 01:18:18.599632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.709 [2024-07-16 01:18:18.599683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.709 qpair failed and we were unable to recover it. 00:25:02.709 [2024-07-16 01:18:18.599921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.709 [2024-07-16 01:18:18.600008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.709 qpair failed and we were unable to recover it. 00:25:02.709 [2024-07-16 01:18:18.600258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.709 [2024-07-16 01:18:18.600306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.709 qpair failed and we were unable to recover it. 00:25:02.709 [2024-07-16 01:18:18.600557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.710 [2024-07-16 01:18:18.600583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.710 qpair failed and we were unable to recover it. 00:25:02.710 [2024-07-16 01:18:18.600775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.710 [2024-07-16 01:18:18.600816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.710 qpair failed and we were unable to recover it. 
00:25:02.710 [2024-07-16 01:18:18.601032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.710 [2024-07-16 01:18:18.601072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.710 qpair failed and we were unable to recover it. 00:25:02.710 [2024-07-16 01:18:18.601210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.710 [2024-07-16 01:18:18.601237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.710 qpair failed and we were unable to recover it. 00:25:02.710 [2024-07-16 01:18:18.601461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.710 [2024-07-16 01:18:18.601511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.710 qpair failed and we were unable to recover it. 00:25:02.710 [2024-07-16 01:18:18.601722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.710 [2024-07-16 01:18:18.601771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.710 qpair failed and we were unable to recover it. 00:25:02.710 [2024-07-16 01:18:18.602033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.710 [2024-07-16 01:18:18.602084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.710 qpair failed and we were unable to recover it. 00:25:02.710 [2024-07-16 01:18:18.602333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.710 [2024-07-16 01:18:18.602382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.710 qpair failed and we were unable to recover it. 00:25:02.710 [2024-07-16 01:18:18.602623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.710 [2024-07-16 01:18:18.602671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.710 qpair failed and we were unable to recover it. 00:25:02.710 [2024-07-16 01:18:18.602888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.710 [2024-07-16 01:18:18.602915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.710 qpair failed and we were unable to recover it. 00:25:02.710 [2024-07-16 01:18:18.603041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.710 [2024-07-16 01:18:18.603068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.710 qpair failed and we were unable to recover it. 00:25:02.710 [2024-07-16 01:18:18.603215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.710 [2024-07-16 01:18:18.603266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.710 qpair failed and we were unable to recover it. 
00:25:02.710 [2024-07-16 01:18:18.603477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.710 [2024-07-16 01:18:18.603526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.710 qpair failed and we were unable to recover it. 00:25:02.710 [2024-07-16 01:18:18.603766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.710 [2024-07-16 01:18:18.603821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.710 qpair failed and we were unable to recover it. 00:25:02.710 [2024-07-16 01:18:18.604058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.710 [2024-07-16 01:18:18.604107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.710 qpair failed and we were unable to recover it. 00:25:02.710 [2024-07-16 01:18:18.604293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.710 [2024-07-16 01:18:18.604341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.710 qpair failed and we were unable to recover it. 00:25:02.710 [2024-07-16 01:18:18.604539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.710 [2024-07-16 01:18:18.604566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.710 qpair failed and we were unable to recover it. 00:25:02.710 [2024-07-16 01:18:18.604697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.710 [2024-07-16 01:18:18.604724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.710 qpair failed and we were unable to recover it. 00:25:02.710 [2024-07-16 01:18:18.604896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.710 [2024-07-16 01:18:18.604945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.710 qpair failed and we were unable to recover it. 00:25:02.710 [2024-07-16 01:18:18.605229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.710 [2024-07-16 01:18:18.605254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.710 qpair failed and we were unable to recover it. 00:25:02.710 [2024-07-16 01:18:18.605365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.710 [2024-07-16 01:18:18.605391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.710 qpair failed and we were unable to recover it. 00:25:02.710 [2024-07-16 01:18:18.605517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.710 [2024-07-16 01:18:18.605544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:02.710 qpair failed and we were unable to recover it. 
00:25:02.710 [2024-07-16 01:18:18.605665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.710 [2024-07-16 01:18:18.605690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:02.710 qpair failed and we were unable to recover it.
[... the same triplet (connect() failed, errno = 111 at posix.c:1023; sock connection error at nvme_tcp.c:2383; "qpair failed and we were unable to recover it.") repeats about 210 times between 01:18:18.605 and 01:18:18.661, all against addr=10.0.0.2, port=4420; every occurrence reports tqpair=0x7f7a58000b90 except three around 01:18:18.620 that report tqpair=0x14b73f0 ...]
00:25:03.003 [2024-07-16 01:18:18.661552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.003 [2024-07-16 01:18:18.661608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:03.003 qpair failed and we were unable to recover it.
00:25:03.003 [2024-07-16 01:18:18.661889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.003 [2024-07-16 01:18:18.661945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.003 qpair failed and we were unable to recover it. 00:25:03.003 [2024-07-16 01:18:18.662211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.003 [2024-07-16 01:18:18.662269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.003 qpair failed and we were unable to recover it. 00:25:03.003 [2024-07-16 01:18:18.662513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.003 [2024-07-16 01:18:18.662569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.003 qpair failed and we were unable to recover it. 00:25:03.003 [2024-07-16 01:18:18.662778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.003 [2024-07-16 01:18:18.662834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.003 qpair failed and we were unable to recover it. 00:25:03.003 [2024-07-16 01:18:18.663101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.003 [2024-07-16 01:18:18.663157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.003 qpair failed and we were unable to recover it. 00:25:03.003 [2024-07-16 01:18:18.663462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.003 [2024-07-16 01:18:18.663518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.003 qpair failed and we were unable to recover it. 00:25:03.003 [2024-07-16 01:18:18.663799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.003 [2024-07-16 01:18:18.663872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.003 qpair failed and we were unable to recover it. 00:25:03.003 [2024-07-16 01:18:18.664152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.003 [2024-07-16 01:18:18.664179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.003 qpair failed and we were unable to recover it. 00:25:03.003 [2024-07-16 01:18:18.664291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.003 [2024-07-16 01:18:18.664316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.003 qpair failed and we were unable to recover it. 00:25:03.003 [2024-07-16 01:18:18.664421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.003 [2024-07-16 01:18:18.664447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.003 qpair failed and we were unable to recover it. 
00:25:03.003 [2024-07-16 01:18:18.664548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.003 [2024-07-16 01:18:18.664574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.003 qpair failed and we were unable to recover it. 00:25:03.003 [2024-07-16 01:18:18.664723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.003 [2024-07-16 01:18:18.664781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.003 qpair failed and we were unable to recover it. 00:25:03.003 [2024-07-16 01:18:18.665038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.003 [2024-07-16 01:18:18.665113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.003 qpair failed and we were unable to recover it. 00:25:03.003 [2024-07-16 01:18:18.665328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.003 [2024-07-16 01:18:18.665384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.003 qpair failed and we were unable to recover it. 00:25:03.003 [2024-07-16 01:18:18.665632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.003 [2024-07-16 01:18:18.665688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.003 qpair failed and we were unable to recover it. 00:25:03.003 [2024-07-16 01:18:18.665928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.003 [2024-07-16 01:18:18.665995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.003 qpair failed and we were unable to recover it. 00:25:03.003 [2024-07-16 01:18:18.666290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.003 [2024-07-16 01:18:18.666347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.003 qpair failed and we were unable to recover it. 00:25:03.003 [2024-07-16 01:18:18.666619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.003 [2024-07-16 01:18:18.666675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.003 qpair failed and we were unable to recover it. 00:25:03.003 [2024-07-16 01:18:18.666916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.003 [2024-07-16 01:18:18.666985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.003 qpair failed and we were unable to recover it. 00:25:03.003 [2024-07-16 01:18:18.667202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.003 [2024-07-16 01:18:18.667260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.003 qpair failed and we were unable to recover it. 
00:25:03.003 [2024-07-16 01:18:18.667508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.003 [2024-07-16 01:18:18.667566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.003 qpair failed and we were unable to recover it. 00:25:03.003 [2024-07-16 01:18:18.667819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.003 [2024-07-16 01:18:18.667876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.003 qpair failed and we were unable to recover it. 00:25:03.003 [2024-07-16 01:18:18.668110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.003 [2024-07-16 01:18:18.668170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.003 qpair failed and we were unable to recover it. 00:25:03.003 [2024-07-16 01:18:18.668417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.003 [2024-07-16 01:18:18.668472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.003 qpair failed and we were unable to recover it. 00:25:03.003 [2024-07-16 01:18:18.668708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.003 [2024-07-16 01:18:18.668735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.003 qpair failed and we were unable to recover it. 00:25:03.003 [2024-07-16 01:18:18.668905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.003 [2024-07-16 01:18:18.668931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.003 qpair failed and we were unable to recover it. 00:25:03.003 [2024-07-16 01:18:18.669140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.003 [2024-07-16 01:18:18.669196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.003 qpair failed and we were unable to recover it. 00:25:03.003 [2024-07-16 01:18:18.669464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.003 [2024-07-16 01:18:18.669528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.003 qpair failed and we were unable to recover it. 00:25:03.003 [2024-07-16 01:18:18.669804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.003 [2024-07-16 01:18:18.669859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.003 qpair failed and we were unable to recover it. 00:25:03.003 [2024-07-16 01:18:18.670121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.003 [2024-07-16 01:18:18.670178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.003 qpair failed and we were unable to recover it. 
00:25:03.003 [2024-07-16 01:18:18.670422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.003 [2024-07-16 01:18:18.670477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.003 qpair failed and we were unable to recover it. 00:25:03.003 [2024-07-16 01:18:18.670755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.003 [2024-07-16 01:18:18.670811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.003 qpair failed and we were unable to recover it. 00:25:03.003 [2024-07-16 01:18:18.671081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.003 [2024-07-16 01:18:18.671139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.003 qpair failed and we were unable to recover it. 00:25:03.003 [2024-07-16 01:18:18.671372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.003 [2024-07-16 01:18:18.671428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.003 qpair failed and we were unable to recover it. 00:25:03.003 [2024-07-16 01:18:18.671697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.003 [2024-07-16 01:18:18.671753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.003 qpair failed and we were unable to recover it. 00:25:03.003 [2024-07-16 01:18:18.672022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.003 [2024-07-16 01:18:18.672079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.003 qpair failed and we were unable to recover it. 00:25:03.003 [2024-07-16 01:18:18.672317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.003 [2024-07-16 01:18:18.672373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.003 qpair failed and we were unable to recover it. 00:25:03.003 [2024-07-16 01:18:18.672643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.003 [2024-07-16 01:18:18.672699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.003 qpair failed and we were unable to recover it. 00:25:03.003 [2024-07-16 01:18:18.672978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.003 [2024-07-16 01:18:18.673035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.003 qpair failed and we were unable to recover it. 00:25:03.003 [2024-07-16 01:18:18.673242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.003 [2024-07-16 01:18:18.673300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.004 qpair failed and we were unable to recover it. 
00:25:03.004 [2024-07-16 01:18:18.673549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.004 [2024-07-16 01:18:18.673605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.004 qpair failed and we were unable to recover it. 00:25:03.004 [2024-07-16 01:18:18.673848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.004 [2024-07-16 01:18:18.673904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.004 qpair failed and we were unable to recover it. 00:25:03.004 [2024-07-16 01:18:18.674189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.004 [2024-07-16 01:18:18.674245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.004 qpair failed and we were unable to recover it. 00:25:03.004 [2024-07-16 01:18:18.674514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.004 [2024-07-16 01:18:18.674586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.004 qpair failed and we were unable to recover it. 00:25:03.004 [2024-07-16 01:18:18.674856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.004 [2024-07-16 01:18:18.674913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.004 qpair failed and we were unable to recover it. 00:25:03.004 [2024-07-16 01:18:18.675222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.004 [2024-07-16 01:18:18.675297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.004 qpair failed and we were unable to recover it. 00:25:03.004 [2024-07-16 01:18:18.675601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.004 [2024-07-16 01:18:18.675674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.004 qpair failed and we were unable to recover it. 00:25:03.004 [2024-07-16 01:18:18.675876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.004 [2024-07-16 01:18:18.675935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.004 qpair failed and we were unable to recover it. 00:25:03.004 [2024-07-16 01:18:18.676259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.004 [2024-07-16 01:18:18.676333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.004 qpair failed and we were unable to recover it. 00:25:03.004 [2024-07-16 01:18:18.676629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.004 [2024-07-16 01:18:18.676702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.004 qpair failed and we were unable to recover it. 
00:25:03.004 [2024-07-16 01:18:18.676984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.004 [2024-07-16 01:18:18.677041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.004 qpair failed and we were unable to recover it. 00:25:03.004 [2024-07-16 01:18:18.677309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.004 [2024-07-16 01:18:18.677385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.004 qpair failed and we were unable to recover it. 00:25:03.004 [2024-07-16 01:18:18.677671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.004 [2024-07-16 01:18:18.677744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.004 qpair failed and we were unable to recover it. 00:25:03.004 [2024-07-16 01:18:18.678014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.004 [2024-07-16 01:18:18.678073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.004 qpair failed and we were unable to recover it. 00:25:03.004 [2024-07-16 01:18:18.678353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.004 [2024-07-16 01:18:18.678426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.004 qpair failed and we were unable to recover it. 00:25:03.004 [2024-07-16 01:18:18.678688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.004 [2024-07-16 01:18:18.678764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.004 qpair failed and we were unable to recover it. 00:25:03.004 [2024-07-16 01:18:18.679060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.004 [2024-07-16 01:18:18.679117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.004 qpair failed and we were unable to recover it. 00:25:03.004 [2024-07-16 01:18:18.679385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.004 [2024-07-16 01:18:18.679460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.004 qpair failed and we were unable to recover it. 00:25:03.004 [2024-07-16 01:18:18.679766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.004 [2024-07-16 01:18:18.679840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.004 qpair failed and we were unable to recover it. 00:25:03.004 [2024-07-16 01:18:18.680145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.004 [2024-07-16 01:18:18.680220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.004 qpair failed and we were unable to recover it. 
00:25:03.004 [2024-07-16 01:18:18.680521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.004 [2024-07-16 01:18:18.680595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.004 qpair failed and we were unable to recover it. 00:25:03.004 [2024-07-16 01:18:18.680875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.004 [2024-07-16 01:18:18.680930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.004 qpair failed and we were unable to recover it. 00:25:03.004 [2024-07-16 01:18:18.681230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.004 [2024-07-16 01:18:18.681288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.004 qpair failed and we were unable to recover it. 00:25:03.004 [2024-07-16 01:18:18.681556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.004 [2024-07-16 01:18:18.681633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.004 qpair failed and we were unable to recover it. 00:25:03.004 [2024-07-16 01:18:18.681898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.004 [2024-07-16 01:18:18.681965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.004 qpair failed and we were unable to recover it. 00:25:03.004 [2024-07-16 01:18:18.682299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.004 [2024-07-16 01:18:18.682374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.004 qpair failed and we were unable to recover it. 00:25:03.004 [2024-07-16 01:18:18.682696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.004 [2024-07-16 01:18:18.682770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.004 qpair failed and we were unable to recover it. 00:25:03.004 [2024-07-16 01:18:18.683016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.004 [2024-07-16 01:18:18.683082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.004 qpair failed and we were unable to recover it. 00:25:03.004 [2024-07-16 01:18:18.683361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.004 [2024-07-16 01:18:18.683433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.004 qpair failed and we were unable to recover it. 00:25:03.004 [2024-07-16 01:18:18.683700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.004 [2024-07-16 01:18:18.683774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.004 qpair failed and we were unable to recover it. 
00:25:03.004 [2024-07-16 01:18:18.684064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.004 [2024-07-16 01:18:18.684138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.004 qpair failed and we were unable to recover it. 00:25:03.004 [2024-07-16 01:18:18.684413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.004 [2024-07-16 01:18:18.684487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.004 qpair failed and we were unable to recover it. 00:25:03.004 [2024-07-16 01:18:18.684727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.004 [2024-07-16 01:18:18.684801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.004 qpair failed and we were unable to recover it. 00:25:03.004 [2024-07-16 01:18:18.685097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.004 [2024-07-16 01:18:18.685171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.004 qpair failed and we were unable to recover it. 00:25:03.004 [2024-07-16 01:18:18.685435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.004 [2024-07-16 01:18:18.685509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.004 qpair failed and we were unable to recover it. 00:25:03.004 [2024-07-16 01:18:18.685746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.004 [2024-07-16 01:18:18.685804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.004 qpair failed and we were unable to recover it. 00:25:03.004 [2024-07-16 01:18:18.686052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.004 [2024-07-16 01:18:18.686128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.004 qpair failed and we were unable to recover it. 00:25:03.004 [2024-07-16 01:18:18.686436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.004 [2024-07-16 01:18:18.686508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.004 qpair failed and we were unable to recover it. 00:25:03.004 [2024-07-16 01:18:18.686743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.004 [2024-07-16 01:18:18.686801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.004 qpair failed and we were unable to recover it. 00:25:03.004 [2024-07-16 01:18:18.687107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.004 [2024-07-16 01:18:18.687182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.004 qpair failed and we were unable to recover it. 
00:25:03.004 [2024-07-16 01:18:18.687448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.004 [2024-07-16 01:18:18.687523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.004 qpair failed and we were unable to recover it. 00:25:03.004 [2024-07-16 01:18:18.687771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.004 [2024-07-16 01:18:18.687827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.004 qpair failed and we were unable to recover it. 00:25:03.004 [2024-07-16 01:18:18.688144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.004 [2024-07-16 01:18:18.688219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.004 qpair failed and we were unable to recover it. 00:25:03.004 [2024-07-16 01:18:18.688516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.004 [2024-07-16 01:18:18.688589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.004 qpair failed and we were unable to recover it. 00:25:03.004 [2024-07-16 01:18:18.688829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.004 [2024-07-16 01:18:18.688886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.004 qpair failed and we were unable to recover it. 00:25:03.004 [2024-07-16 01:18:18.689177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.004 [2024-07-16 01:18:18.689253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.004 qpair failed and we were unable to recover it. 00:25:03.004 [2024-07-16 01:18:18.689557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.004 [2024-07-16 01:18:18.689631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.004 qpair failed and we were unable to recover it. 00:25:03.004 [2024-07-16 01:18:18.689872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.004 [2024-07-16 01:18:18.689930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.004 qpair failed and we were unable to recover it. 00:25:03.004 [2024-07-16 01:18:18.690235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.004 [2024-07-16 01:18:18.690318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.004 qpair failed and we were unable to recover it. 00:25:03.004 [2024-07-16 01:18:18.690617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.004 [2024-07-16 01:18:18.690690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.004 qpair failed and we were unable to recover it. 
00:25:03.004 [2024-07-16 01:18:18.690939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.004 [2024-07-16 01:18:18.691007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.004 qpair failed and we were unable to recover it. 00:25:03.004 [2024-07-16 01:18:18.691226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.004 [2024-07-16 01:18:18.691299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.004 qpair failed and we were unable to recover it. 00:25:03.004 [2024-07-16 01:18:18.691559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.004 [2024-07-16 01:18:18.691633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.004 qpair failed and we were unable to recover it. 00:25:03.004 [2024-07-16 01:18:18.691910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.005 [2024-07-16 01:18:18.691985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.005 qpair failed and we were unable to recover it. 00:25:03.005 [2024-07-16 01:18:18.692301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.005 [2024-07-16 01:18:18.692384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.005 qpair failed and we were unable to recover it. 00:25:03.005 [2024-07-16 01:18:18.692680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.005 [2024-07-16 01:18:18.692753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.005 qpair failed and we were unable to recover it. 00:25:03.005 [2024-07-16 01:18:18.693075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.005 [2024-07-16 01:18:18.693132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.005 qpair failed and we were unable to recover it. 00:25:03.005 [2024-07-16 01:18:18.693427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.005 [2024-07-16 01:18:18.693500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.005 qpair failed and we were unable to recover it. 00:25:03.005 [2024-07-16 01:18:18.693809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.005 [2024-07-16 01:18:18.693884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.005 qpair failed and we were unable to recover it. 00:25:03.005 [2024-07-16 01:18:18.694148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.005 [2024-07-16 01:18:18.694205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.005 qpair failed and we were unable to recover it. 
00:25:03.005 [2024-07-16 01:18:18.694517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.005 [2024-07-16 01:18:18.694591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.005 qpair failed and we were unable to recover it. 00:25:03.005 [2024-07-16 01:18:18.694832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.005 [2024-07-16 01:18:18.694893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.005 qpair failed and we were unable to recover it. 00:25:03.005 [2024-07-16 01:18:18.695163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.005 [2024-07-16 01:18:18.695237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.005 qpair failed and we were unable to recover it. 00:25:03.005 [2024-07-16 01:18:18.695533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.005 [2024-07-16 01:18:18.695606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.005 qpair failed and we were unable to recover it. 00:25:03.005 [2024-07-16 01:18:18.695879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.005 [2024-07-16 01:18:18.695936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.005 qpair failed and we were unable to recover it. 00:25:03.005 [2024-07-16 01:18:18.696224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.005 [2024-07-16 01:18:18.696309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.005 qpair failed and we were unable to recover it. 00:25:03.005 [2024-07-16 01:18:18.696564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.005 [2024-07-16 01:18:18.696638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.005 qpair failed and we were unable to recover it. 00:25:03.005 [2024-07-16 01:18:18.696882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.005 [2024-07-16 01:18:18.696966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.005 qpair failed and we were unable to recover it. 00:25:03.005 [2024-07-16 01:18:18.697284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.005 [2024-07-16 01:18:18.697366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.005 qpair failed and we were unable to recover it. 00:25:03.005 [2024-07-16 01:18:18.697680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.005 [2024-07-16 01:18:18.697755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.005 qpair failed and we were unable to recover it. 
00:25:03.005 [2024-07-16 01:18:18.698008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.005 [2024-07-16 01:18:18.698035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.005 qpair failed and we were unable to recover it. 00:25:03.005 [2024-07-16 01:18:18.698200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.005 [2024-07-16 01:18:18.698226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.005 qpair failed and we were unable to recover it. 00:25:03.005 [2024-07-16 01:18:18.698466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.005 [2024-07-16 01:18:18.698541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.005 qpair failed and we were unable to recover it. 00:25:03.005 [2024-07-16 01:18:18.698858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.005 [2024-07-16 01:18:18.698932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.005 qpair failed and we were unable to recover it. 00:25:03.005 [2024-07-16 01:18:18.699278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.005 [2024-07-16 01:18:18.699365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.005 qpair failed and we were unable to recover it. 00:25:03.005 [2024-07-16 01:18:18.699638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.005 [2024-07-16 01:18:18.699719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.005 qpair failed and we were unable to recover it. 00:25:03.005 [2024-07-16 01:18:18.699999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.005 [2024-07-16 01:18:18.700056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.005 qpair failed and we were unable to recover it. 00:25:03.005 [2024-07-16 01:18:18.700336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.005 [2024-07-16 01:18:18.700411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.005 qpair failed and we were unable to recover it. 00:25:03.005 [2024-07-16 01:18:18.700728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.005 [2024-07-16 01:18:18.700802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.005 qpair failed and we were unable to recover it. 00:25:03.005 [2024-07-16 01:18:18.701082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.005 [2024-07-16 01:18:18.701139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.005 qpair failed and we were unable to recover it. 
00:25:03.005 [2024-07-16 01:18:18.701375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.005 [2024-07-16 01:18:18.701449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.005 qpair failed and we were unable to recover it. 00:25:03.005 [2024-07-16 01:18:18.701731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.005 [2024-07-16 01:18:18.701807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.005 qpair failed and we were unable to recover it. 00:25:03.005 [2024-07-16 01:18:18.702087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.005 [2024-07-16 01:18:18.702163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.005 qpair failed and we were unable to recover it. 00:25:03.005 [2024-07-16 01:18:18.702472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.005 [2024-07-16 01:18:18.702547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.005 qpair failed and we were unable to recover it. 00:25:03.005 [2024-07-16 01:18:18.702779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.005 [2024-07-16 01:18:18.702835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.005 qpair failed and we were unable to recover it. 00:25:03.005 [2024-07-16 01:18:18.703154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.005 [2024-07-16 01:18:18.703229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.005 qpair failed and we were unable to recover it. 00:25:03.005 [2024-07-16 01:18:18.703535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.005 [2024-07-16 01:18:18.703610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.005 qpair failed and we were unable to recover it. 00:25:03.005 [2024-07-16 01:18:18.703806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.005 [2024-07-16 01:18:18.703865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.005 qpair failed and we were unable to recover it. 00:25:03.005 [2024-07-16 01:18:18.704197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.005 [2024-07-16 01:18:18.704283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.005 qpair failed and we were unable to recover it. 00:25:03.005 [2024-07-16 01:18:18.704592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.005 [2024-07-16 01:18:18.704666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.005 qpair failed and we were unable to recover it. 
00:25:03.005 [2024-07-16 01:18:18.704945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.005 [2024-07-16 01:18:18.705012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:03.005 qpair failed and we were unable to recover it.
00:25:03.005 [... the connect()/sock-connection-error/qpair-failed triplet above repeats continuously for tqpair=0x7f7a58000b90 through 01:18:18.724 ...]
00:25:03.007 [2024-07-16 01:18:18.724479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.007 [2024-07-16 01:18:18.724552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:03.007 qpair failed and we were unable to recover it.
00:25:03.007 [... same triplet repeats for tqpair=0x14b73f0 through 01:18:18.726 ...]
00:25:03.007 [2024-07-16 01:18:18.726684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.007 [2024-07-16 01:18:18.726720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:03.007 qpair failed and we were unable to recover it.
00:25:03.007 [... same triplet repeats, alternating among tqpair=0x7f7a60000b90, 0x14b73f0, and 0x7f7a58000b90, through 01:18:18.747 ...]
00:25:03.009 [2024-07-16 01:18:18.747608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.009 [2024-07-16 01:18:18.747641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:03.009 qpair failed and we were unable to recover it.
00:25:03.009 [2024-07-16 01:18:18.747820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.009 [2024-07-16 01:18:18.747847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.009 qpair failed and we were unable to recover it. 00:25:03.009 [2024-07-16 01:18:18.748002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.009 [2024-07-16 01:18:18.748029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.009 qpair failed and we were unable to recover it. 00:25:03.009 [2024-07-16 01:18:18.748156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.009 [2024-07-16 01:18:18.748181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.009 qpair failed and we were unable to recover it. 00:25:03.009 [2024-07-16 01:18:18.748320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.009 [2024-07-16 01:18:18.748354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.009 qpair failed and we were unable to recover it. 00:25:03.009 [2024-07-16 01:18:18.748493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.009 [2024-07-16 01:18:18.748519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.009 qpair failed and we were unable to recover it. 00:25:03.009 [2024-07-16 01:18:18.748662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.009 [2024-07-16 01:18:18.748700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.009 qpair failed and we were unable to recover it. 00:25:03.009 [2024-07-16 01:18:18.748848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.009 [2024-07-16 01:18:18.748881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.009 qpair failed and we were unable to recover it. 00:25:03.009 [2024-07-16 01:18:18.749056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.009 [2024-07-16 01:18:18.749083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.009 qpair failed and we were unable to recover it. 00:25:03.010 [2024-07-16 01:18:18.749206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.010 [2024-07-16 01:18:18.749232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.010 qpair failed and we were unable to recover it. 00:25:03.010 [2024-07-16 01:18:18.749379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.010 [2024-07-16 01:18:18.749413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.010 qpair failed and we were unable to recover it. 
00:25:03.010 [2024-07-16 01:18:18.749564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.010 [2024-07-16 01:18:18.749598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.010 qpair failed and we were unable to recover it. 00:25:03.010 [2024-07-16 01:18:18.749752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.010 [2024-07-16 01:18:18.749786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.010 qpair failed and we were unable to recover it. 00:25:03.010 [2024-07-16 01:18:18.749916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.010 [2024-07-16 01:18:18.749950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.010 qpair failed and we were unable to recover it. 00:25:03.010 [2024-07-16 01:18:18.750097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.010 [2024-07-16 01:18:18.750122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.010 qpair failed and we were unable to recover it. 00:25:03.010 [2024-07-16 01:18:18.750245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.010 [2024-07-16 01:18:18.750279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.010 qpair failed and we were unable to recover it. 00:25:03.010 [2024-07-16 01:18:18.750464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.010 [2024-07-16 01:18:18.750498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.010 qpair failed and we were unable to recover it. 00:25:03.010 [2024-07-16 01:18:18.750650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.010 [2024-07-16 01:18:18.750692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.010 qpair failed and we were unable to recover it. 00:25:03.010 [2024-07-16 01:18:18.750857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.010 [2024-07-16 01:18:18.750891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.010 qpair failed and we were unable to recover it. 00:25:03.010 [2024-07-16 01:18:18.751035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.010 [2024-07-16 01:18:18.751062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.010 qpair failed and we were unable to recover it. 00:25:03.010 [2024-07-16 01:18:18.751158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.010 [2024-07-16 01:18:18.751184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.010 qpair failed and we were unable to recover it. 
00:25:03.010 [2024-07-16 01:18:18.751294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.010 [2024-07-16 01:18:18.751323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.010 qpair failed and we were unable to recover it. 00:25:03.010 [2024-07-16 01:18:18.751484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.010 [2024-07-16 01:18:18.751525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.010 qpair failed and we were unable to recover it. 00:25:03.010 [2024-07-16 01:18:18.751660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.010 [2024-07-16 01:18:18.751700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.010 qpair failed and we were unable to recover it. 00:25:03.010 [2024-07-16 01:18:18.751858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.010 [2024-07-16 01:18:18.751891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.010 qpair failed and we were unable to recover it. 00:25:03.010 [2024-07-16 01:18:18.752046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.010 [2024-07-16 01:18:18.752073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.010 qpair failed and we were unable to recover it. 00:25:03.010 [2024-07-16 01:18:18.752178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.010 [2024-07-16 01:18:18.752204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.010 qpair failed and we were unable to recover it. 00:25:03.010 [2024-07-16 01:18:18.752384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.010 [2024-07-16 01:18:18.752419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.010 qpair failed and we were unable to recover it. 00:25:03.010 [2024-07-16 01:18:18.752567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.010 [2024-07-16 01:18:18.752600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.010 qpair failed and we were unable to recover it. 00:25:03.010 [2024-07-16 01:18:18.752731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.010 [2024-07-16 01:18:18.752765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.010 qpair failed and we were unable to recover it. 00:25:03.010 [2024-07-16 01:18:18.752913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.010 [2024-07-16 01:18:18.752947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.010 qpair failed and we were unable to recover it. 
00:25:03.010 [2024-07-16 01:18:18.753082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.010 [2024-07-16 01:18:18.753108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.010 qpair failed and we were unable to recover it. 00:25:03.010 [2024-07-16 01:18:18.753210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.010 [2024-07-16 01:18:18.753236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.010 qpair failed and we were unable to recover it. 00:25:03.010 [2024-07-16 01:18:18.753356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.010 [2024-07-16 01:18:18.753382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.010 qpair failed and we were unable to recover it. 00:25:03.010 [2024-07-16 01:18:18.753535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.010 [2024-07-16 01:18:18.753569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.010 qpair failed and we were unable to recover it. 00:25:03.010 [2024-07-16 01:18:18.753732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.010 [2024-07-16 01:18:18.753769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.010 qpair failed and we were unable to recover it. 00:25:03.010 [2024-07-16 01:18:18.753960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.010 [2024-07-16 01:18:18.753987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.010 qpair failed and we were unable to recover it. 00:25:03.010 [2024-07-16 01:18:18.754106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.010 [2024-07-16 01:18:18.754132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.010 qpair failed and we were unable to recover it. 00:25:03.010 [2024-07-16 01:18:18.754243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.010 [2024-07-16 01:18:18.754269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.010 qpair failed and we were unable to recover it. 00:25:03.010 [2024-07-16 01:18:18.754362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.010 [2024-07-16 01:18:18.754387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.010 qpair failed and we were unable to recover it. 00:25:03.010 [2024-07-16 01:18:18.754513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.010 [2024-07-16 01:18:18.754539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.010 qpair failed and we were unable to recover it. 
00:25:03.010 [2024-07-16 01:18:18.754684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.010 [2024-07-16 01:18:18.754711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.010 qpair failed and we were unable to recover it. 00:25:03.010 [2024-07-16 01:18:18.754813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.010 [2024-07-16 01:18:18.754838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.010 qpair failed and we were unable to recover it. 00:25:03.010 [2024-07-16 01:18:18.754970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.010 [2024-07-16 01:18:18.754997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.010 qpair failed and we were unable to recover it. 00:25:03.010 [2024-07-16 01:18:18.755095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.010 [2024-07-16 01:18:18.755121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.010 qpair failed and we were unable to recover it. 00:25:03.010 [2024-07-16 01:18:18.755214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.010 [2024-07-16 01:18:18.755241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.010 qpair failed and we were unable to recover it. 00:25:03.010 [2024-07-16 01:18:18.755355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.010 [2024-07-16 01:18:18.755381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.010 qpair failed and we were unable to recover it. 00:25:03.010 [2024-07-16 01:18:18.755477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.010 [2024-07-16 01:18:18.755502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.010 qpair failed and we were unable to recover it. 00:25:03.010 [2024-07-16 01:18:18.755635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.010 [2024-07-16 01:18:18.755661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.010 qpair failed and we were unable to recover it. 00:25:03.010 [2024-07-16 01:18:18.755765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.010 [2024-07-16 01:18:18.755792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.010 qpair failed and we were unable to recover it. 00:25:03.010 [2024-07-16 01:18:18.755883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.010 [2024-07-16 01:18:18.755909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.010 qpair failed and we were unable to recover it. 
00:25:03.010 [2024-07-16 01:18:18.756042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.010 [2024-07-16 01:18:18.756069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.010 qpair failed and we were unable to recover it. 00:25:03.010 [2024-07-16 01:18:18.756176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.010 [2024-07-16 01:18:18.756202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.010 qpair failed and we were unable to recover it. 00:25:03.010 [2024-07-16 01:18:18.756352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.010 [2024-07-16 01:18:18.756378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.010 qpair failed and we were unable to recover it. 00:25:03.010 [2024-07-16 01:18:18.756466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.010 [2024-07-16 01:18:18.756491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.010 qpair failed and we were unable to recover it. 00:25:03.010 [2024-07-16 01:18:18.756620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.010 [2024-07-16 01:18:18.756646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.010 qpair failed and we were unable to recover it. 00:25:03.010 [2024-07-16 01:18:18.757384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.010 [2024-07-16 01:18:18.757414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.010 qpair failed and we were unable to recover it. 00:25:03.010 [2024-07-16 01:18:18.757575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.010 [2024-07-16 01:18:18.757602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.010 qpair failed and we were unable to recover it. 00:25:03.010 [2024-07-16 01:18:18.757754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.010 [2024-07-16 01:18:18.757780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.010 qpair failed and we were unable to recover it. 00:25:03.010 [2024-07-16 01:18:18.757882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.010 [2024-07-16 01:18:18.757907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.010 qpair failed and we were unable to recover it. 00:25:03.010 [2024-07-16 01:18:18.758027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.010 [2024-07-16 01:18:18.758054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.010 qpair failed and we were unable to recover it. 
00:25:03.010 [2024-07-16 01:18:18.758159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.010 [2024-07-16 01:18:18.758186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.010 qpair failed and we were unable to recover it. 00:25:03.010 [2024-07-16 01:18:18.758323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.010 [2024-07-16 01:18:18.758350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.010 qpair failed and we were unable to recover it. 00:25:03.010 [2024-07-16 01:18:18.758509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.011 [2024-07-16 01:18:18.758538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.011 qpair failed and we were unable to recover it. 00:25:03.011 [2024-07-16 01:18:18.758653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.011 [2024-07-16 01:18:18.758680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.011 qpair failed and we were unable to recover it. 00:25:03.011 [2024-07-16 01:18:18.758803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.011 [2024-07-16 01:18:18.758853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.011 qpair failed and we were unable to recover it. 00:25:03.011 [2024-07-16 01:18:18.759001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.011 [2024-07-16 01:18:18.759046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.011 qpair failed and we were unable to recover it. 00:25:03.011 [2024-07-16 01:18:18.759152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.011 [2024-07-16 01:18:18.759177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.011 qpair failed and we were unable to recover it. 00:25:03.011 [2024-07-16 01:18:18.759280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.011 [2024-07-16 01:18:18.759307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.011 qpair failed and we were unable to recover it. 00:25:03.011 [2024-07-16 01:18:18.759462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.011 [2024-07-16 01:18:18.759490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.011 qpair failed and we were unable to recover it. 00:25:03.011 [2024-07-16 01:18:18.759633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.011 [2024-07-16 01:18:18.759661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.011 qpair failed and we were unable to recover it. 
00:25:03.011 [2024-07-16 01:18:18.759789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.011 [2024-07-16 01:18:18.759822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.011 qpair failed and we were unable to recover it. 00:25:03.011 [2024-07-16 01:18:18.759976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.011 [2024-07-16 01:18:18.760029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.011 qpair failed and we were unable to recover it. 00:25:03.011 [2024-07-16 01:18:18.760137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.011 [2024-07-16 01:18:18.760162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.011 qpair failed and we were unable to recover it. 00:25:03.011 [2024-07-16 01:18:18.760280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.011 [2024-07-16 01:18:18.760308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.011 qpair failed and we were unable to recover it. 00:25:03.011 [2024-07-16 01:18:18.760426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.011 [2024-07-16 01:18:18.760454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.011 qpair failed and we were unable to recover it. 00:25:03.011 [2024-07-16 01:18:18.760625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.011 [2024-07-16 01:18:18.760654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.011 qpair failed and we were unable to recover it. 00:25:03.011 [2024-07-16 01:18:18.760758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.011 [2024-07-16 01:18:18.760791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.011 qpair failed and we were unable to recover it. 00:25:03.011 [2024-07-16 01:18:18.760912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.011 [2024-07-16 01:18:18.760940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.011 qpair failed and we were unable to recover it. 00:25:03.011 [2024-07-16 01:18:18.761066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.011 [2024-07-16 01:18:18.761092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.011 qpair failed and we were unable to recover it. 00:25:03.011 [2024-07-16 01:18:18.761191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.011 [2024-07-16 01:18:18.761220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.011 qpair failed and we were unable to recover it. 
00:25:03.011 [2024-07-16 01:18:18.761344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.011 [2024-07-16 01:18:18.761372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.011 qpair failed and we were unable to recover it. 00:25:03.011 [2024-07-16 01:18:18.761510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.011 [2024-07-16 01:18:18.761539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.011 qpair failed and we were unable to recover it. 00:25:03.011 [2024-07-16 01:18:18.761635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.011 [2024-07-16 01:18:18.761662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.011 qpair failed and we were unable to recover it. 00:25:03.011 [2024-07-16 01:18:18.761774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.011 [2024-07-16 01:18:18.761812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.011 qpair failed and we were unable to recover it. 00:25:03.011 [2024-07-16 01:18:18.761960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.011 [2024-07-16 01:18:18.762007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.011 qpair failed and we were unable to recover it. 00:25:03.011 [2024-07-16 01:18:18.762111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.011 [2024-07-16 01:18:18.762136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.011 qpair failed and we were unable to recover it. 00:25:03.011 [2024-07-16 01:18:18.762247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.011 [2024-07-16 01:18:18.762273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.011 qpair failed and we were unable to recover it. 00:25:03.011 [2024-07-16 01:18:18.762397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.011 [2024-07-16 01:18:18.762423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.011 qpair failed and we were unable to recover it. 00:25:03.011 [2024-07-16 01:18:18.762538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.011 [2024-07-16 01:18:18.762563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.011 qpair failed and we were unable to recover it. 00:25:03.011 [2024-07-16 01:18:18.762712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.011 [2024-07-16 01:18:18.762747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.011 qpair failed and we were unable to recover it. 
00:25:03.011 [2024-07-16 01:18:18.762860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.011 [2024-07-16 01:18:18.762897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.011 qpair failed and we were unable to recover it. 00:25:03.011 [2024-07-16 01:18:18.763037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.011 [2024-07-16 01:18:18.763062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.011 qpair failed and we were unable to recover it. 00:25:03.011 [2024-07-16 01:18:18.763167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.011 [2024-07-16 01:18:18.763193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.011 qpair failed and we were unable to recover it. 00:25:03.011 [2024-07-16 01:18:18.763294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.011 [2024-07-16 01:18:18.763319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.011 qpair failed and we were unable to recover it. 00:25:03.011 [2024-07-16 01:18:18.763474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.011 [2024-07-16 01:18:18.763501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.011 qpair failed and we were unable to recover it. 00:25:03.011 [2024-07-16 01:18:18.763601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.011 [2024-07-16 01:18:18.763643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.011 qpair failed and we were unable to recover it. 00:25:03.011 [2024-07-16 01:18:18.763754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.011 [2024-07-16 01:18:18.763782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.011 qpair failed and we were unable to recover it. 00:25:03.011 [2024-07-16 01:18:18.763935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.011 [2024-07-16 01:18:18.764000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.011 qpair failed and we were unable to recover it. 00:25:03.011 [2024-07-16 01:18:18.764107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.011 [2024-07-16 01:18:18.764132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.011 qpair failed and we were unable to recover it. 00:25:03.011 [2024-07-16 01:18:18.764239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.011 [2024-07-16 01:18:18.764274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.011 qpair failed and we were unable to recover it. 
00:25:03.011 [2024-07-16 01:18:18.764379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.011 [2024-07-16 01:18:18.764405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.011 qpair failed and we were unable to recover it. 00:25:03.011 [2024-07-16 01:18:18.764525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.011 [2024-07-16 01:18:18.764554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.011 qpair failed and we were unable to recover it. 00:25:03.011 [2024-07-16 01:18:18.764699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.011 [2024-07-16 01:18:18.764731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.011 qpair failed and we were unable to recover it. 00:25:03.011 [2024-07-16 01:18:18.764854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.011 [2024-07-16 01:18:18.764882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.011 qpair failed and we were unable to recover it. 00:25:03.011 [2024-07-16 01:18:18.765027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.011 [2024-07-16 01:18:18.765054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.011 qpair failed and we were unable to recover it. 00:25:03.011 [2024-07-16 01:18:18.765160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.011 [2024-07-16 01:18:18.765185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.011 qpair failed and we were unable to recover it. 00:25:03.011 [2024-07-16 01:18:18.765339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.011 [2024-07-16 01:18:18.765364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.011 qpair failed and we were unable to recover it. 00:25:03.011 [2024-07-16 01:18:18.765468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.011 [2024-07-16 01:18:18.765494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.011 qpair failed and we were unable to recover it. 00:25:03.011 [2024-07-16 01:18:18.765621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.011 [2024-07-16 01:18:18.765654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.011 qpair failed and we were unable to recover it. 00:25:03.011 [2024-07-16 01:18:18.765798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.011 [2024-07-16 01:18:18.765826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.011 qpair failed and we were unable to recover it. 
00:25:03.011 [2024-07-16 01:18:18.765948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.011 [2024-07-16 01:18:18.766004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.011 qpair failed and we were unable to recover it. 00:25:03.011 [2024-07-16 01:18:18.766114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.011 [2024-07-16 01:18:18.766139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.011 qpair failed and we were unable to recover it. 00:25:03.011 [2024-07-16 01:18:18.766238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.011 [2024-07-16 01:18:18.766265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.011 qpair failed and we were unable to recover it. 00:25:03.011 [2024-07-16 01:18:18.766366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.011 [2024-07-16 01:18:18.766391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.011 qpair failed and we were unable to recover it. 00:25:03.011 [2024-07-16 01:18:18.766517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.011 [2024-07-16 01:18:18.766561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.011 qpair failed and we were unable to recover it. 00:25:03.011 [2024-07-16 01:18:18.766740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.011 [2024-07-16 01:18:18.766770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.011 qpair failed and we were unable to recover it. 00:25:03.011 [2024-07-16 01:18:18.766874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.011 [2024-07-16 01:18:18.766903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.011 qpair failed and we were unable to recover it. 00:25:03.012 [2024-07-16 01:18:18.767031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.012 [2024-07-16 01:18:18.767068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.012 qpair failed and we were unable to recover it. 00:25:03.012 [2024-07-16 01:18:18.767184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.012 [2024-07-16 01:18:18.767209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.012 qpair failed and we were unable to recover it. 00:25:03.012 [2024-07-16 01:18:18.767314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.012 [2024-07-16 01:18:18.767339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.012 qpair failed and we were unable to recover it. 
00:25:03.012 [2024-07-16 01:18:18.767474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.012 [2024-07-16 01:18:18.767499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.012 qpair failed and we were unable to recover it. 00:25:03.012 [2024-07-16 01:18:18.767623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.012 [2024-07-16 01:18:18.767654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.012 qpair failed and we were unable to recover it. 00:25:03.012 [2024-07-16 01:18:18.767783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.012 [2024-07-16 01:18:18.767812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.012 qpair failed and we were unable to recover it. 00:25:03.012 [2024-07-16 01:18:18.767949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.012 [2024-07-16 01:18:18.767985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.012 qpair failed and we were unable to recover it. 00:25:03.012 [2024-07-16 01:18:18.768117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.012 [2024-07-16 01:18:18.768142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.012 qpair failed and we were unable to recover it. 00:25:03.012 [2024-07-16 01:18:18.768263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.012 [2024-07-16 01:18:18.768290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.012 qpair failed and we were unable to recover it. 00:25:03.012 [2024-07-16 01:18:18.768424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.012 [2024-07-16 01:18:18.768449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.012 qpair failed and we were unable to recover it. 00:25:03.012 [2024-07-16 01:18:18.768625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.012 [2024-07-16 01:18:18.768653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.012 qpair failed and we were unable to recover it. 00:25:03.012 [2024-07-16 01:18:18.768786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.012 [2024-07-16 01:18:18.768814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.012 qpair failed and we were unable to recover it. 00:25:03.012 [2024-07-16 01:18:18.768929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.012 [2024-07-16 01:18:18.768972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.012 qpair failed and we were unable to recover it. 
00:25:03.012 [2024-07-16 01:18:18.769094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.012 [2024-07-16 01:18:18.769119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.012 qpair failed and we were unable to recover it.
[... the same connect()/qpair-failure triplet repeats back-to-back, differing only in timestamp: for tqpair=0x14b73f0 through [2024-07-16 01:18:18.794934], then for tqpair=0x7f7a58000b90 from [2024-07-16 01:18:18.795079] onward; every attempt targets addr=10.0.0.2, port=4420 and fails with errno = 111 (ECONNREFUSED) ...]
00:25:03.016 [2024-07-16 01:18:18.803698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.016 [2024-07-16 01:18:18.803738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.016 qpair failed and we were unable to recover it.
00:25:03.016 [2024-07-16 01:18:18.803892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.016 [2024-07-16 01:18:18.803934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.016 qpair failed and we were unable to recover it. 00:25:03.016 [2024-07-16 01:18:18.804103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.016 [2024-07-16 01:18:18.804130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.016 qpair failed and we were unable to recover it. 00:25:03.016 [2024-07-16 01:18:18.804238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.016 [2024-07-16 01:18:18.804264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.016 qpair failed and we were unable to recover it. 00:25:03.016 [2024-07-16 01:18:18.804421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.016 [2024-07-16 01:18:18.804449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.016 qpair failed and we were unable to recover it. 00:25:03.016 [2024-07-16 01:18:18.804581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.016 [2024-07-16 01:18:18.804610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.016 qpair failed and we were unable to recover it. 00:25:03.016 [2024-07-16 01:18:18.804785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.016 [2024-07-16 01:18:18.804836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.016 qpair failed and we were unable to recover it. 00:25:03.016 [2024-07-16 01:18:18.805002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.016 [2024-07-16 01:18:18.805041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.016 qpair failed and we were unable to recover it. 00:25:03.016 [2024-07-16 01:18:18.805189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.016 [2024-07-16 01:18:18.805229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:03.016 qpair failed and we were unable to recover it. 00:25:03.016 [2024-07-16 01:18:18.805383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.016 [2024-07-16 01:18:18.805412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:03.016 qpair failed and we were unable to recover it. 00:25:03.016 [2024-07-16 01:18:18.805552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.016 [2024-07-16 01:18:18.805578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:03.016 qpair failed and we were unable to recover it. 
00:25:03.016 [2024-07-16 01:18:18.805713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.016 [2024-07-16 01:18:18.805739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:03.016 qpair failed and we were unable to recover it. 00:25:03.016 [2024-07-16 01:18:18.805862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.016 [2024-07-16 01:18:18.805888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:03.016 qpair failed and we were unable to recover it. 00:25:03.016 [2024-07-16 01:18:18.806058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.016 [2024-07-16 01:18:18.806105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:03.016 qpair failed and we were unable to recover it. 00:25:03.016 [2024-07-16 01:18:18.806256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.016 [2024-07-16 01:18:18.806287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:03.016 qpair failed and we were unable to recover it. 00:25:03.016 [2024-07-16 01:18:18.806436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.016 [2024-07-16 01:18:18.806466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:03.016 qpair failed and we were unable to recover it. 00:25:03.016 [2024-07-16 01:18:18.806588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.016 [2024-07-16 01:18:18.806619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:03.016 qpair failed and we were unable to recover it. 00:25:03.016 [2024-07-16 01:18:18.806742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.016 [2024-07-16 01:18:18.806773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:03.016 qpair failed and we were unable to recover it. 00:25:03.016 [2024-07-16 01:18:18.806920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.016 [2024-07-16 01:18:18.806950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:03.016 qpair failed and we were unable to recover it. 00:25:03.016 [2024-07-16 01:18:18.807089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.016 [2024-07-16 01:18:18.807119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:03.016 qpair failed and we were unable to recover it. 00:25:03.016 [2024-07-16 01:18:18.807243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.016 [2024-07-16 01:18:18.807273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:03.016 qpair failed and we were unable to recover it. 
00:25:03.016 [2024-07-16 01:18:18.807458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.016 [2024-07-16 01:18:18.807492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:03.016 qpair failed and we were unable to recover it. 00:25:03.016 [2024-07-16 01:18:18.807651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.016 [2024-07-16 01:18:18.807680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:03.016 qpair failed and we were unable to recover it. 00:25:03.016 [2024-07-16 01:18:18.807835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.016 [2024-07-16 01:18:18.807866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:03.016 qpair failed and we were unable to recover it. 00:25:03.016 [2024-07-16 01:18:18.807997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.016 [2024-07-16 01:18:18.808029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.016 qpair failed and we were unable to recover it. 00:25:03.016 [2024-07-16 01:18:18.808129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.016 [2024-07-16 01:18:18.808157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.016 qpair failed and we were unable to recover it. 00:25:03.016 [2024-07-16 01:18:18.808264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.016 [2024-07-16 01:18:18.808306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.016 qpair failed and we were unable to recover it. 00:25:03.016 [2024-07-16 01:18:18.808462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.016 [2024-07-16 01:18:18.808507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.016 qpair failed and we were unable to recover it. 00:25:03.016 [2024-07-16 01:18:18.808714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.016 [2024-07-16 01:18:18.808750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.016 qpair failed and we were unable to recover it. 00:25:03.016 [2024-07-16 01:18:18.808902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.016 [2024-07-16 01:18:18.808937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.016 qpair failed and we were unable to recover it. 00:25:03.016 [2024-07-16 01:18:18.809086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.016 [2024-07-16 01:18:18.809123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.016 qpair failed and we were unable to recover it. 
00:25:03.016 [2024-07-16 01:18:18.809293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.016 [2024-07-16 01:18:18.809330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.016 qpair failed and we were unable to recover it. 00:25:03.016 [2024-07-16 01:18:18.809537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.016 [2024-07-16 01:18:18.809564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.016 qpair failed and we were unable to recover it. 00:25:03.016 [2024-07-16 01:18:18.809722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.016 [2024-07-16 01:18:18.809748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.016 qpair failed and we were unable to recover it. 00:25:03.016 [2024-07-16 01:18:18.809849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.016 [2024-07-16 01:18:18.809877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.016 qpair failed and we were unable to recover it. 00:25:03.016 [2024-07-16 01:18:18.809995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.016 [2024-07-16 01:18:18.810023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.016 qpair failed and we were unable to recover it. 00:25:03.016 [2024-07-16 01:18:18.810771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.016 [2024-07-16 01:18:18.810802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.016 qpair failed and we were unable to recover it. 00:25:03.017 [2024-07-16 01:18:18.811037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.017 [2024-07-16 01:18:18.811065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.017 qpair failed and we were unable to recover it. 00:25:03.017 [2024-07-16 01:18:18.811187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.017 [2024-07-16 01:18:18.811214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.017 qpair failed and we were unable to recover it. 00:25:03.017 [2024-07-16 01:18:18.811349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.017 [2024-07-16 01:18:18.811375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.017 qpair failed and we were unable to recover it. 00:25:03.017 [2024-07-16 01:18:18.811487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.017 [2024-07-16 01:18:18.811513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.017 qpair failed and we were unable to recover it. 
00:25:03.017 [2024-07-16 01:18:18.811643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.017 [2024-07-16 01:18:18.811670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.017 qpair failed and we were unable to recover it. 00:25:03.017 [2024-07-16 01:18:18.811795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.017 [2024-07-16 01:18:18.811821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.017 qpair failed and we were unable to recover it. 00:25:03.017 [2024-07-16 01:18:18.811927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.017 [2024-07-16 01:18:18.811953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.017 qpair failed and we were unable to recover it. 00:25:03.017 [2024-07-16 01:18:18.812093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.017 [2024-07-16 01:18:18.812120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.017 qpair failed and we were unable to recover it. 00:25:03.017 [2024-07-16 01:18:18.812248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.017 [2024-07-16 01:18:18.812274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.017 qpair failed and we were unable to recover it. 00:25:03.017 [2024-07-16 01:18:18.812430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.017 [2024-07-16 01:18:18.812456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.017 qpair failed and we were unable to recover it. 00:25:03.017 [2024-07-16 01:18:18.812559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.017 [2024-07-16 01:18:18.812587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.017 qpair failed and we were unable to recover it. 00:25:03.017 [2024-07-16 01:18:18.812730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.017 [2024-07-16 01:18:18.812756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.017 qpair failed and we were unable to recover it. 00:25:03.017 [2024-07-16 01:18:18.812915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.017 [2024-07-16 01:18:18.812954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.017 qpair failed and we were unable to recover it. 00:25:03.017 [2024-07-16 01:18:18.813115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.017 [2024-07-16 01:18:18.813149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.017 qpair failed and we were unable to recover it. 
00:25:03.017 [2024-07-16 01:18:18.813286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.017 [2024-07-16 01:18:18.813312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.017 qpair failed and we were unable to recover it. 00:25:03.017 [2024-07-16 01:18:18.813416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.017 [2024-07-16 01:18:18.813443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.017 qpair failed and we were unable to recover it. 00:25:03.017 [2024-07-16 01:18:18.813608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.017 [2024-07-16 01:18:18.813637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.017 qpair failed and we were unable to recover it. 00:25:03.017 [2024-07-16 01:18:18.813758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.017 [2024-07-16 01:18:18.813795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.017 qpair failed and we were unable to recover it. 00:25:03.017 [2024-07-16 01:18:18.813945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.017 [2024-07-16 01:18:18.813986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.017 qpair failed and we were unable to recover it. 00:25:03.017 [2024-07-16 01:18:18.814092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.017 [2024-07-16 01:18:18.814119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.017 qpair failed and we were unable to recover it. 00:25:03.017 [2024-07-16 01:18:18.814217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.017 [2024-07-16 01:18:18.814246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.017 qpair failed and we were unable to recover it. 00:25:03.017 [2024-07-16 01:18:18.814385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.017 [2024-07-16 01:18:18.814412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.017 qpair failed and we were unable to recover it. 00:25:03.017 [2024-07-16 01:18:18.814560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.017 [2024-07-16 01:18:18.814596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.017 qpair failed and we were unable to recover it. 00:25:03.017 [2024-07-16 01:18:18.814736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.017 [2024-07-16 01:18:18.814771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.017 qpair failed and we were unable to recover it. 
00:25:03.017 [2024-07-16 01:18:18.814953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.017 [2024-07-16 01:18:18.815009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.017 qpair failed and we were unable to recover it. 00:25:03.017 [2024-07-16 01:18:18.815124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.017 [2024-07-16 01:18:18.815150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.017 qpair failed and we were unable to recover it. 00:25:03.017 [2024-07-16 01:18:18.815301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.017 [2024-07-16 01:18:18.815328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.017 qpair failed and we were unable to recover it. 00:25:03.017 [2024-07-16 01:18:18.815469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.017 [2024-07-16 01:18:18.815511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.017 qpair failed and we were unable to recover it. 00:25:03.017 [2024-07-16 01:18:18.815657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.017 [2024-07-16 01:18:18.815683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.017 qpair failed and we were unable to recover it. 00:25:03.017 [2024-07-16 01:18:18.815786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.017 [2024-07-16 01:18:18.815812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.017 qpair failed and we were unable to recover it. 00:25:03.017 [2024-07-16 01:18:18.815939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.017 [2024-07-16 01:18:18.815975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.017 qpair failed and we were unable to recover it. 00:25:03.017 [2024-07-16 01:18:18.816078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.017 [2024-07-16 01:18:18.816105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.017 qpair failed and we were unable to recover it. 00:25:03.017 [2024-07-16 01:18:18.816204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.017 [2024-07-16 01:18:18.816231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.017 qpair failed and we were unable to recover it. 00:25:03.017 [2024-07-16 01:18:18.816326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.017 [2024-07-16 01:18:18.816352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.017 qpair failed and we were unable to recover it. 
00:25:03.017 [2024-07-16 01:18:18.816472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.017 [2024-07-16 01:18:18.816498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.017 qpair failed and we were unable to recover it. 00:25:03.017 [2024-07-16 01:18:18.816649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.017 [2024-07-16 01:18:18.816675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.017 qpair failed and we were unable to recover it. 00:25:03.017 [2024-07-16 01:18:18.816828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.017 [2024-07-16 01:18:18.816854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.017 qpair failed and we were unable to recover it. 00:25:03.017 [2024-07-16 01:18:18.816971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.017 [2024-07-16 01:18:18.816998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.017 qpair failed and we were unable to recover it. 00:25:03.017 [2024-07-16 01:18:18.817128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.017 [2024-07-16 01:18:18.817155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.017 qpair failed and we were unable to recover it. 00:25:03.017 [2024-07-16 01:18:18.817258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.017 [2024-07-16 01:18:18.817283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.017 qpair failed and we were unable to recover it. 00:25:03.017 [2024-07-16 01:18:18.817385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.017 [2024-07-16 01:18:18.817411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.017 qpair failed and we were unable to recover it. 00:25:03.017 [2024-07-16 01:18:18.817515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.017 [2024-07-16 01:18:18.817541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.017 qpair failed and we were unable to recover it. 00:25:03.017 [2024-07-16 01:18:18.817670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.017 [2024-07-16 01:18:18.817696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.017 qpair failed and we were unable to recover it. 00:25:03.017 [2024-07-16 01:18:18.817826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.017 [2024-07-16 01:18:18.817853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.017 qpair failed and we were unable to recover it. 
00:25:03.017 [2024-07-16 01:18:18.817983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.017 [2024-07-16 01:18:18.818010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.017 qpair failed and we were unable to recover it. 00:25:03.017 [2024-07-16 01:18:18.818112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.017 [2024-07-16 01:18:18.818138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.017 qpair failed and we were unable to recover it. 00:25:03.017 [2024-07-16 01:18:18.818240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.017 [2024-07-16 01:18:18.818268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.017 qpair failed and we were unable to recover it. 00:25:03.017 [2024-07-16 01:18:18.818397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.017 [2024-07-16 01:18:18.818423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.017 qpair failed and we were unable to recover it. 00:25:03.017 [2024-07-16 01:18:18.818553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.017 [2024-07-16 01:18:18.818579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.017 qpair failed and we were unable to recover it. 00:25:03.017 [2024-07-16 01:18:18.818703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.017 [2024-07-16 01:18:18.818730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.017 qpair failed and we were unable to recover it. 00:25:03.017 [2024-07-16 01:18:18.818944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.017 [2024-07-16 01:18:18.818979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.017 qpair failed and we were unable to recover it. 00:25:03.018 [2024-07-16 01:18:18.819086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.018 [2024-07-16 01:18:18.819113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.018 qpair failed and we were unable to recover it. 00:25:03.018 [2024-07-16 01:18:18.819217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.018 [2024-07-16 01:18:18.819242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.018 qpair failed and we were unable to recover it. 00:25:03.018 [2024-07-16 01:18:18.819371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.018 [2024-07-16 01:18:18.819397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.018 qpair failed and we were unable to recover it. 
00:25:03.018 [2024-07-16 01:18:18.819521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.018 [2024-07-16 01:18:18.819548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.018 qpair failed and we were unable to recover it. 00:25:03.018 [2024-07-16 01:18:18.819682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.018 [2024-07-16 01:18:18.819708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.018 qpair failed and we were unable to recover it. 00:25:03.018 [2024-07-16 01:18:18.819809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.018 [2024-07-16 01:18:18.819835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.018 qpair failed and we were unable to recover it. 00:25:03.018 [2024-07-16 01:18:18.819930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.018 [2024-07-16 01:18:18.819963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.018 qpair failed and we were unable to recover it. 00:25:03.018 [2024-07-16 01:18:18.820077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.018 [2024-07-16 01:18:18.820103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.018 qpair failed and we were unable to recover it. 00:25:03.018 [2024-07-16 01:18:18.820201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.018 [2024-07-16 01:18:18.820228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.018 qpair failed and we were unable to recover it. 00:25:03.018 [2024-07-16 01:18:18.820334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.018 [2024-07-16 01:18:18.820361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.018 qpair failed and we were unable to recover it. 00:25:03.018 [2024-07-16 01:18:18.820486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.018 [2024-07-16 01:18:18.820512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.018 qpair failed and we were unable to recover it. 00:25:03.018 [2024-07-16 01:18:18.820609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.018 [2024-07-16 01:18:18.820636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.018 qpair failed and we were unable to recover it. 00:25:03.018 [2024-07-16 01:18:18.820774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.018 [2024-07-16 01:18:18.820800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.018 qpair failed and we were unable to recover it. 
00:25:03.018 [2024-07-16 01:18:18.820896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.018 [2024-07-16 01:18:18.820927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.018 qpair failed and we were unable to recover it. 00:25:03.018 [2024-07-16 01:18:18.821064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.018 [2024-07-16 01:18:18.821091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.018 qpair failed and we were unable to recover it. 00:25:03.018 [2024-07-16 01:18:18.821188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.018 [2024-07-16 01:18:18.821214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.018 qpair failed and we were unable to recover it. 00:25:03.018 [2024-07-16 01:18:18.821377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.018 [2024-07-16 01:18:18.821404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.018 qpair failed and we were unable to recover it. 00:25:03.018 [2024-07-16 01:18:18.821554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.018 [2024-07-16 01:18:18.821580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.018 qpair failed and we were unable to recover it. 00:25:03.018 [2024-07-16 01:18:18.821705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.018 [2024-07-16 01:18:18.821731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.018 qpair failed and we were unable to recover it. 00:25:03.018 [2024-07-16 01:18:18.821830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.018 [2024-07-16 01:18:18.821856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.018 qpair failed and we were unable to recover it. 00:25:03.018 [2024-07-16 01:18:18.821951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.018 [2024-07-16 01:18:18.821985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.018 qpair failed and we were unable to recover it. 00:25:03.018 [2024-07-16 01:18:18.822107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.018 [2024-07-16 01:18:18.822132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.018 qpair failed and we were unable to recover it. 00:25:03.018 [2024-07-16 01:18:18.822231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.018 [2024-07-16 01:18:18.822258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.018 qpair failed and we were unable to recover it. 
00:25:03.018 [2024-07-16 01:18:18.822414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.018 [2024-07-16 01:18:18.822440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.018 qpair failed and we were unable to recover it. 00:25:03.018 [2024-07-16 01:18:18.822565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.018 [2024-07-16 01:18:18.822591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.018 qpair failed and we were unable to recover it. 00:25:03.018 [2024-07-16 01:18:18.822692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.018 [2024-07-16 01:18:18.822718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.018 qpair failed and we were unable to recover it. 00:25:03.018 [2024-07-16 01:18:18.822922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.018 [2024-07-16 01:18:18.822948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.018 qpair failed and we were unable to recover it. 00:25:03.018 [2024-07-16 01:18:18.823084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.018 [2024-07-16 01:18:18.823111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.018 qpair failed and we were unable to recover it. 00:25:03.018 [2024-07-16 01:18:18.823210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.018 [2024-07-16 01:18:18.823238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.018 qpair failed and we were unable to recover it. 00:25:03.018 [2024-07-16 01:18:18.823374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.018 [2024-07-16 01:18:18.823400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.018 qpair failed and we were unable to recover it. 00:25:03.018 [2024-07-16 01:18:18.823529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.018 [2024-07-16 01:18:18.823555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.018 qpair failed and we were unable to recover it. 00:25:03.018 [2024-07-16 01:18:18.823678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.018 [2024-07-16 01:18:18.823704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.018 qpair failed and we were unable to recover it. 00:25:03.018 [2024-07-16 01:18:18.823832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.018 [2024-07-16 01:18:18.823859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.018 qpair failed and we were unable to recover it. 
00:25:03.018 [2024-07-16 01:18:18.823982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.018 [2024-07-16 01:18:18.824010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.018 qpair failed and we were unable to recover it. 00:25:03.018 [2024-07-16 01:18:18.824113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.018 [2024-07-16 01:18:18.824140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.018 qpair failed and we were unable to recover it. 00:25:03.018 [2024-07-16 01:18:18.824243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.018 [2024-07-16 01:18:18.824270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.018 qpair failed and we were unable to recover it. 00:25:03.018 [2024-07-16 01:18:18.824393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.018 [2024-07-16 01:18:18.824419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.018 qpair failed and we were unable to recover it. 00:25:03.018 [2024-07-16 01:18:18.824575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.018 [2024-07-16 01:18:18.824601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.018 qpair failed and we were unable to recover it. 00:25:03.018 [2024-07-16 01:18:18.824724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.018 [2024-07-16 01:18:18.824751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.018 qpair failed and we were unable to recover it. 00:25:03.018 [2024-07-16 01:18:18.824878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.018 [2024-07-16 01:18:18.824904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.018 qpair failed and we were unable to recover it. 00:25:03.018 [2024-07-16 01:18:18.825017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.018 [2024-07-16 01:18:18.825046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.018 qpair failed and we were unable to recover it. 00:25:03.018 [2024-07-16 01:18:18.825178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.018 [2024-07-16 01:18:18.825204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.018 qpair failed and we were unable to recover it. 00:25:03.018 [2024-07-16 01:18:18.825313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.018 [2024-07-16 01:18:18.825341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.018 qpair failed and we were unable to recover it. 
00:25:03.018 [2024-07-16 01:18:18.825470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:25:03.018 [2024-07-16 01:18:18.825496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 
00:25:03.018 qpair failed and we were unable to recover it. 
00:25:03.023 [... identical "connect() failed, errno = 111" / "sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." triplets repeat continuously from 01:18:18.825470 through 01:18:18.858999 ...]
00:25:03.023 [2024-07-16 01:18:18.859110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.023 [2024-07-16 01:18:18.859138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.023 qpair failed and we were unable to recover it. 00:25:03.023 [2024-07-16 01:18:18.859277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.024 [2024-07-16 01:18:18.859305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.024 qpair failed and we were unable to recover it. 00:25:03.024 [2024-07-16 01:18:18.859460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.024 [2024-07-16 01:18:18.859487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.024 qpair failed and we were unable to recover it. 00:25:03.024 [2024-07-16 01:18:18.859623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.024 [2024-07-16 01:18:18.859650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.024 qpair failed and we were unable to recover it. 00:25:03.024 [2024-07-16 01:18:18.859808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.024 [2024-07-16 01:18:18.859836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.024 qpair failed and we were unable to recover it. 00:25:03.024 [2024-07-16 01:18:18.859937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.024 [2024-07-16 01:18:18.859974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.024 qpair failed and we were unable to recover it. 00:25:03.024 [2024-07-16 01:18:18.860109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.024 [2024-07-16 01:18:18.860136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.024 qpair failed and we were unable to recover it. 00:25:03.024 [2024-07-16 01:18:18.860271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.024 [2024-07-16 01:18:18.860298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.024 qpair failed and we were unable to recover it. 00:25:03.024 [2024-07-16 01:18:18.860435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.024 [2024-07-16 01:18:18.860462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.024 qpair failed and we were unable to recover it. 00:25:03.024 [2024-07-16 01:18:18.860603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.024 [2024-07-16 01:18:18.860631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.024 qpair failed and we were unable to recover it. 
00:25:03.024 [2024-07-16 01:18:18.860768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.024 [2024-07-16 01:18:18.860796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.024 qpair failed and we were unable to recover it. 00:25:03.024 [2024-07-16 01:18:18.860906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.024 [2024-07-16 01:18:18.860932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.024 qpair failed and we were unable to recover it. 00:25:03.024 [2024-07-16 01:18:18.861045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.024 [2024-07-16 01:18:18.861074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.024 qpair failed and we were unable to recover it. 00:25:03.024 [2024-07-16 01:18:18.861203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.024 [2024-07-16 01:18:18.861256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.024 qpair failed and we were unable to recover it. 00:25:03.024 [2024-07-16 01:18:18.861418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.024 [2024-07-16 01:18:18.861472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.024 qpair failed and we were unable to recover it. 00:25:03.024 [2024-07-16 01:18:18.861603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.024 [2024-07-16 01:18:18.861632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.024 qpair failed and we were unable to recover it. 00:25:03.024 [2024-07-16 01:18:18.861770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.024 [2024-07-16 01:18:18.861797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.024 qpair failed and we were unable to recover it. 00:25:03.024 [2024-07-16 01:18:18.861905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.024 [2024-07-16 01:18:18.861934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.024 qpair failed and we were unable to recover it. 00:25:03.024 [2024-07-16 01:18:18.862077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.024 [2024-07-16 01:18:18.862105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.024 qpair failed and we were unable to recover it. 00:25:03.024 [2024-07-16 01:18:18.862237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.024 [2024-07-16 01:18:18.862286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.024 qpair failed and we were unable to recover it. 
00:25:03.024 [2024-07-16 01:18:18.862441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.024 [2024-07-16 01:18:18.862470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.024 qpair failed and we were unable to recover it. 00:25:03.024 [2024-07-16 01:18:18.862603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.024 [2024-07-16 01:18:18.862630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.024 qpair failed and we were unable to recover it. 00:25:03.024 [2024-07-16 01:18:18.862731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.024 [2024-07-16 01:18:18.862758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.024 qpair failed and we were unable to recover it. 00:25:03.024 [2024-07-16 01:18:18.862893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.024 [2024-07-16 01:18:18.862922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.024 qpair failed and we were unable to recover it. 00:25:03.024 [2024-07-16 01:18:18.863071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.024 [2024-07-16 01:18:18.863099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.024 qpair failed and we were unable to recover it. 00:25:03.024 [2024-07-16 01:18:18.863232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.024 [2024-07-16 01:18:18.863259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.024 qpair failed and we were unable to recover it. 00:25:03.024 [2024-07-16 01:18:18.863419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.024 [2024-07-16 01:18:18.863447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.024 qpair failed and we were unable to recover it. 00:25:03.024 [2024-07-16 01:18:18.863554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.024 [2024-07-16 01:18:18.863586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.024 qpair failed and we were unable to recover it. 00:25:03.024 [2024-07-16 01:18:18.863732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.024 [2024-07-16 01:18:18.863760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.024 qpair failed and we were unable to recover it. 00:25:03.024 [2024-07-16 01:18:18.863896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.024 [2024-07-16 01:18:18.863924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.024 qpair failed and we were unable to recover it. 
00:25:03.024 [2024-07-16 01:18:18.864100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.024 [2024-07-16 01:18:18.864149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.024 qpair failed and we were unable to recover it. 00:25:03.024 [2024-07-16 01:18:18.864319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.024 [2024-07-16 01:18:18.864365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.024 qpair failed and we were unable to recover it. 00:25:03.024 [2024-07-16 01:18:18.864568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.024 [2024-07-16 01:18:18.864619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.024 qpair failed and we were unable to recover it. 00:25:03.024 [2024-07-16 01:18:18.864752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.024 [2024-07-16 01:18:18.864778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.024 qpair failed and we were unable to recover it. 00:25:03.024 [2024-07-16 01:18:18.864912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.024 [2024-07-16 01:18:18.864939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.024 qpair failed and we were unable to recover it. 00:25:03.024 [2024-07-16 01:18:18.865073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.024 [2024-07-16 01:18:18.865100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.024 qpair failed and we were unable to recover it. 00:25:03.024 [2024-07-16 01:18:18.865299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.024 [2024-07-16 01:18:18.865350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.024 qpair failed and we were unable to recover it. 00:25:03.024 [2024-07-16 01:18:18.865466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.024 [2024-07-16 01:18:18.865504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.024 qpair failed and we were unable to recover it. 00:25:03.024 [2024-07-16 01:18:18.865680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.024 [2024-07-16 01:18:18.865707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.024 qpair failed and we were unable to recover it. 00:25:03.024 [2024-07-16 01:18:18.865836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.024 [2024-07-16 01:18:18.865864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.024 qpair failed and we were unable to recover it. 
00:25:03.024 [2024-07-16 01:18:18.866015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.024 [2024-07-16 01:18:18.866042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.024 qpair failed and we were unable to recover it. 00:25:03.024 [2024-07-16 01:18:18.866170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.024 [2024-07-16 01:18:18.866196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.024 qpair failed and we were unable to recover it. 00:25:03.024 [2024-07-16 01:18:18.866297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.024 [2024-07-16 01:18:18.866322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.024 qpair failed and we were unable to recover it. 00:25:03.024 [2024-07-16 01:18:18.866442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.024 [2024-07-16 01:18:18.866470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.024 qpair failed and we were unable to recover it. 00:25:03.024 [2024-07-16 01:18:18.866625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.024 [2024-07-16 01:18:18.866652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.024 qpair failed and we were unable to recover it. 00:25:03.024 [2024-07-16 01:18:18.866757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.024 [2024-07-16 01:18:18.866785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.024 qpair failed and we were unable to recover it. 00:25:03.024 [2024-07-16 01:18:18.866926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.024 [2024-07-16 01:18:18.866953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.024 qpair failed and we were unable to recover it. 00:25:03.024 [2024-07-16 01:18:18.867089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.024 [2024-07-16 01:18:18.867116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.024 qpair failed and we were unable to recover it. 00:25:03.024 [2024-07-16 01:18:18.867250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.024 [2024-07-16 01:18:18.867291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.024 qpair failed and we were unable to recover it. 00:25:03.024 [2024-07-16 01:18:18.867418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.024 [2024-07-16 01:18:18.867443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.024 qpair failed and we were unable to recover it. 
00:25:03.024 [2024-07-16 01:18:18.867574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.024 [2024-07-16 01:18:18.867603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.024 qpair failed and we were unable to recover it. 00:25:03.024 [2024-07-16 01:18:18.867760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.024 [2024-07-16 01:18:18.867787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.024 qpair failed and we were unable to recover it. 00:25:03.024 [2024-07-16 01:18:18.867919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.024 [2024-07-16 01:18:18.867947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.024 qpair failed and we were unable to recover it. 00:25:03.024 [2024-07-16 01:18:18.868092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.024 [2024-07-16 01:18:18.868141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.024 qpair failed and we were unable to recover it. 00:25:03.024 [2024-07-16 01:18:18.868342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.024 [2024-07-16 01:18:18.868391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.024 qpair failed and we were unable to recover it. 00:25:03.024 [2024-07-16 01:18:18.868510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.024 [2024-07-16 01:18:18.868549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.024 qpair failed and we were unable to recover it. 00:25:03.025 [2024-07-16 01:18:18.868692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.025 [2024-07-16 01:18:18.868719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.025 qpair failed and we were unable to recover it. 00:25:03.025 [2024-07-16 01:18:18.868850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.025 [2024-07-16 01:18:18.868878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.025 qpair failed and we were unable to recover it. 00:25:03.025 [2024-07-16 01:18:18.869035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.025 [2024-07-16 01:18:18.869086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.025 qpair failed and we were unable to recover it. 00:25:03.025 [2024-07-16 01:18:18.869243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.025 [2024-07-16 01:18:18.869270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.025 qpair failed and we were unable to recover it. 
00:25:03.025 [2024-07-16 01:18:18.869457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.025 [2024-07-16 01:18:18.869506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.025 qpair failed and we were unable to recover it. 00:25:03.025 [2024-07-16 01:18:18.869639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.025 [2024-07-16 01:18:18.869668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.025 qpair failed and we were unable to recover it. 00:25:03.025 [2024-07-16 01:18:18.869799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.025 [2024-07-16 01:18:18.869826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.025 qpair failed and we were unable to recover it. 00:25:03.025 [2024-07-16 01:18:18.869930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.025 [2024-07-16 01:18:18.869966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.025 qpair failed and we were unable to recover it. 00:25:03.025 [2024-07-16 01:18:18.870109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.025 [2024-07-16 01:18:18.870138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.025 qpair failed and we were unable to recover it. 00:25:03.025 [2024-07-16 01:18:18.870302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.025 [2024-07-16 01:18:18.870329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.025 qpair failed and we were unable to recover it. 00:25:03.025 [2024-07-16 01:18:18.870435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.025 [2024-07-16 01:18:18.870462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.025 qpair failed and we were unable to recover it. 00:25:03.025 [2024-07-16 01:18:18.870577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.025 [2024-07-16 01:18:18.870609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.025 qpair failed and we were unable to recover it. 00:25:03.025 [2024-07-16 01:18:18.870740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.025 [2024-07-16 01:18:18.870766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.025 qpair failed and we were unable to recover it. 00:25:03.025 [2024-07-16 01:18:18.870871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.025 [2024-07-16 01:18:18.870899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.025 qpair failed and we were unable to recover it. 
00:25:03.025 [2024-07-16 01:18:18.871084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.025 [2024-07-16 01:18:18.871133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.025 qpair failed and we were unable to recover it. 00:25:03.025 [2024-07-16 01:18:18.871302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.025 [2024-07-16 01:18:18.871354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.025 qpair failed and we were unable to recover it. 00:25:03.025 [2024-07-16 01:18:18.871496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.025 [2024-07-16 01:18:18.871546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.025 qpair failed and we were unable to recover it. 00:25:03.025 [2024-07-16 01:18:18.871679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.025 [2024-07-16 01:18:18.871708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.025 qpair failed and we were unable to recover it. 00:25:03.025 [2024-07-16 01:18:18.871839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.025 [2024-07-16 01:18:18.871866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.025 qpair failed and we were unable to recover it. 00:25:03.025 [2024-07-16 01:18:18.872017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.025 [2024-07-16 01:18:18.872044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.025 qpair failed and we were unable to recover it. 00:25:03.025 [2024-07-16 01:18:18.872191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.025 [2024-07-16 01:18:18.872217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.025 qpair failed and we were unable to recover it. 00:25:03.025 [2024-07-16 01:18:18.872337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.025 [2024-07-16 01:18:18.872363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.025 qpair failed and we were unable to recover it. 00:25:03.025 [2024-07-16 01:18:18.872508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.025 [2024-07-16 01:18:18.872535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.025 qpair failed and we were unable to recover it. 00:25:03.025 [2024-07-16 01:18:18.872670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.025 [2024-07-16 01:18:18.872699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.025 qpair failed and we were unable to recover it. 
00:25:03.025 [2024-07-16 01:18:18.872804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.025 [2024-07-16 01:18:18.872831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.025 qpair failed and we were unable to recover it. 00:25:03.025 [2024-07-16 01:18:18.872972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.025 [2024-07-16 01:18:18.873000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.025 qpair failed and we were unable to recover it. 00:25:03.025 [2024-07-16 01:18:18.873192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.025 [2024-07-16 01:18:18.873243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.025 qpair failed and we were unable to recover it. 00:25:03.025 [2024-07-16 01:18:18.873408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.025 [2024-07-16 01:18:18.873454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.025 qpair failed and we were unable to recover it. 00:25:03.025 [2024-07-16 01:18:18.873586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.025 [2024-07-16 01:18:18.873613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.025 qpair failed and we were unable to recover it. 00:25:03.025 [2024-07-16 01:18:18.873746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.025 [2024-07-16 01:18:18.873774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.025 qpair failed and we were unable to recover it. 00:25:03.025 [2024-07-16 01:18:18.873948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.025 [2024-07-16 01:18:18.873981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.025 qpair failed and we were unable to recover it. 00:25:03.025 [2024-07-16 01:18:18.874151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.025 [2024-07-16 01:18:18.874197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.025 qpair failed and we were unable to recover it. 00:25:03.025 [2024-07-16 01:18:18.874328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.025 [2024-07-16 01:18:18.874384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.025 qpair failed and we were unable to recover it. 00:25:03.025 [2024-07-16 01:18:18.874548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.025 [2024-07-16 01:18:18.874601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.025 qpair failed and we were unable to recover it. 
00:25:03.025 [2024-07-16 01:18:18.875397] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c53f0 is same with the state(5) to be set
00:25:03.025 [2024-07-16 01:18:18.875613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.025 [2024-07-16 01:18:18.875671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:03.025 qpair failed and we were unable to recover it.
[2024-07-16 01:18:18.875897 .. 01:18:18.876918] (the same failure sequence repeats, now for tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420)
00:25:03.025 [2024-07-16 01:18:18.877168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.025 [2024-07-16 01:18:18.877217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:03.025 qpair failed and we were unable to recover it.
[2024-07-16 01:18:18.877413 .. 01:18:18.888120] (the failure sequence resumes for tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 and repeats without variation)
00:25:03.027 [2024-07-16 01:18:18.888278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.027 [2024-07-16 01:18:18.888327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:03.027 qpair failed and we were unable to recover it.
00:25:03.027 [2024-07-16 01:18:18.888482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.027 [2024-07-16 01:18:18.888511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.027 qpair failed and we were unable to recover it. 00:25:03.027 [2024-07-16 01:18:18.888620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.027 [2024-07-16 01:18:18.888647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.027 qpair failed and we were unable to recover it. 00:25:03.027 [2024-07-16 01:18:18.888801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.027 [2024-07-16 01:18:18.888829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.027 qpair failed and we were unable to recover it. 00:25:03.027 [2024-07-16 01:18:18.888966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.027 [2024-07-16 01:18:18.888995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.027 qpair failed and we were unable to recover it. 00:25:03.027 [2024-07-16 01:18:18.889133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.027 [2024-07-16 01:18:18.889182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.027 qpair failed and we were unable to recover it. 00:25:03.027 [2024-07-16 01:18:18.889318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.027 [2024-07-16 01:18:18.889345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.027 qpair failed and we were unable to recover it. 00:25:03.027 [2024-07-16 01:18:18.889479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.027 [2024-07-16 01:18:18.889507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.027 qpair failed and we were unable to recover it. 00:25:03.027 [2024-07-16 01:18:18.889642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.027 [2024-07-16 01:18:18.889669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.027 qpair failed and we were unable to recover it. 00:25:03.027 [2024-07-16 01:18:18.889789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.027 [2024-07-16 01:18:18.889816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.027 qpair failed and we were unable to recover it. 00:25:03.027 [2024-07-16 01:18:18.889970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.027 [2024-07-16 01:18:18.889998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.027 qpair failed and we were unable to recover it. 
00:25:03.027 [2024-07-16 01:18:18.890130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.027 [2024-07-16 01:18:18.890179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.027 qpair failed and we were unable to recover it. 00:25:03.027 [2024-07-16 01:18:18.890327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.027 [2024-07-16 01:18:18.890375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.027 qpair failed and we were unable to recover it. 00:25:03.027 [2024-07-16 01:18:18.890506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.027 [2024-07-16 01:18:18.890532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.027 qpair failed and we were unable to recover it. 00:25:03.027 [2024-07-16 01:18:18.890659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.027 [2024-07-16 01:18:18.890686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.027 qpair failed and we were unable to recover it. 00:25:03.027 [2024-07-16 01:18:18.890791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.027 [2024-07-16 01:18:18.890818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.027 qpair failed and we were unable to recover it. 00:25:03.027 [2024-07-16 01:18:18.890945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.027 [2024-07-16 01:18:18.890994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.027 qpair failed and we were unable to recover it. 00:25:03.027 [2024-07-16 01:18:18.891133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.027 [2024-07-16 01:18:18.891183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.027 qpair failed and we were unable to recover it. 00:25:03.027 [2024-07-16 01:18:18.891312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.027 [2024-07-16 01:18:18.891342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.027 qpair failed and we were unable to recover it. 00:25:03.027 [2024-07-16 01:18:18.891501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.027 [2024-07-16 01:18:18.891528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.027 qpair failed and we were unable to recover it. 00:25:03.027 [2024-07-16 01:18:18.891695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.027 [2024-07-16 01:18:18.891722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.027 qpair failed and we were unable to recover it. 
00:25:03.027 [2024-07-16 01:18:18.891855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.027 [2024-07-16 01:18:18.891883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.027 qpair failed and we were unable to recover it. 00:25:03.027 [2024-07-16 01:18:18.892055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.027 [2024-07-16 01:18:18.892106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.027 qpair failed and we were unable to recover it. 00:25:03.027 [2024-07-16 01:18:18.892231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.027 [2024-07-16 01:18:18.892259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.027 qpair failed and we were unable to recover it. 00:25:03.027 [2024-07-16 01:18:18.892394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.027 [2024-07-16 01:18:18.892423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.027 qpair failed and we were unable to recover it. 00:25:03.027 [2024-07-16 01:18:18.892555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.027 [2024-07-16 01:18:18.892582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.027 qpair failed and we were unable to recover it. 00:25:03.027 [2024-07-16 01:18:18.892713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.027 [2024-07-16 01:18:18.892741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.027 qpair failed and we were unable to recover it. 00:25:03.027 [2024-07-16 01:18:18.892873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.027 [2024-07-16 01:18:18.892901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.027 qpair failed and we were unable to recover it. 00:25:03.027 [2024-07-16 01:18:18.893041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.027 [2024-07-16 01:18:18.893069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.027 qpair failed and we were unable to recover it. 00:25:03.027 [2024-07-16 01:18:18.893236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.027 [2024-07-16 01:18:18.893265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.027 qpair failed and we were unable to recover it. 00:25:03.027 [2024-07-16 01:18:18.893420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.027 [2024-07-16 01:18:18.893448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.027 qpair failed and we were unable to recover it. 
00:25:03.027 [2024-07-16 01:18:18.893583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.027 [2024-07-16 01:18:18.893610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.027 qpair failed and we were unable to recover it. 00:25:03.027 [2024-07-16 01:18:18.893717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.027 [2024-07-16 01:18:18.893749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.027 qpair failed and we were unable to recover it. 00:25:03.027 [2024-07-16 01:18:18.893905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.027 [2024-07-16 01:18:18.893934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.027 qpair failed and we were unable to recover it. 00:25:03.027 [2024-07-16 01:18:18.894141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.027 [2024-07-16 01:18:18.894195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:03.027 qpair failed and we were unable to recover it. 00:25:03.027 [2024-07-16 01:18:18.894388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.027 [2024-07-16 01:18:18.894435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:03.027 qpair failed and we were unable to recover it. 00:25:03.027 [2024-07-16 01:18:18.894629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.027 [2024-07-16 01:18:18.894661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:03.027 qpair failed and we were unable to recover it. 00:25:03.027 [2024-07-16 01:18:18.894840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.027 [2024-07-16 01:18:18.894869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.027 qpair failed and we were unable to recover it. 00:25:03.027 [2024-07-16 01:18:18.894981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.027 [2024-07-16 01:18:18.895009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.027 qpair failed and we were unable to recover it. 00:25:03.027 [2024-07-16 01:18:18.895171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.027 [2024-07-16 01:18:18.895219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.027 qpair failed and we were unable to recover it. 00:25:03.027 [2024-07-16 01:18:18.895351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.027 [2024-07-16 01:18:18.895400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.027 qpair failed and we were unable to recover it. 
00:25:03.027 [2024-07-16 01:18:18.895558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.027 [2024-07-16 01:18:18.895610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.027 qpair failed and we were unable to recover it. 00:25:03.027 [2024-07-16 01:18:18.895765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.027 [2024-07-16 01:18:18.895793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.027 qpair failed and we were unable to recover it. 00:25:03.027 [2024-07-16 01:18:18.895900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.028 [2024-07-16 01:18:18.895926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.028 qpair failed and we were unable to recover it. 00:25:03.028 [2024-07-16 01:18:18.896139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.028 [2024-07-16 01:18:18.896190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.028 qpair failed and we were unable to recover it. 00:25:03.028 [2024-07-16 01:18:18.896347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.028 [2024-07-16 01:18:18.896397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.028 qpair failed and we were unable to recover it. 00:25:03.028 [2024-07-16 01:18:18.896594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.028 [2024-07-16 01:18:18.896645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.028 qpair failed and we were unable to recover it. 00:25:03.028 [2024-07-16 01:18:18.896753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.028 [2024-07-16 01:18:18.896782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.028 qpair failed and we were unable to recover it. 00:25:03.028 [2024-07-16 01:18:18.896892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.028 [2024-07-16 01:18:18.896919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.028 qpair failed and we were unable to recover it. 00:25:03.028 [2024-07-16 01:18:18.897091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.028 [2024-07-16 01:18:18.897140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.028 qpair failed and we were unable to recover it. 00:25:03.028 [2024-07-16 01:18:18.897308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.028 [2024-07-16 01:18:18.897357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.028 qpair failed and we were unable to recover it. 
00:25:03.028 [2024-07-16 01:18:18.897521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.028 [2024-07-16 01:18:18.897571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.028 qpair failed and we were unable to recover it. 00:25:03.028 [2024-07-16 01:18:18.897700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.028 [2024-07-16 01:18:18.897727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.028 qpair failed and we were unable to recover it. 00:25:03.028 [2024-07-16 01:18:18.897852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.028 [2024-07-16 01:18:18.897879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.028 qpair failed and we were unable to recover it. 00:25:03.028 [2024-07-16 01:18:18.898070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.028 [2024-07-16 01:18:18.898128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.028 qpair failed and we were unable to recover it. 00:25:03.028 [2024-07-16 01:18:18.898261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.028 [2024-07-16 01:18:18.898315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.028 qpair failed and we were unable to recover it. 00:25:03.028 [2024-07-16 01:18:18.898482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.028 [2024-07-16 01:18:18.898529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.028 qpair failed and we were unable to recover it. 00:25:03.028 [2024-07-16 01:18:18.898660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.028 [2024-07-16 01:18:18.898688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.028 qpair failed and we were unable to recover it. 00:25:03.028 [2024-07-16 01:18:18.898824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.028 [2024-07-16 01:18:18.898854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.028 qpair failed and we were unable to recover it. 00:25:03.028 [2024-07-16 01:18:18.898969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.028 [2024-07-16 01:18:18.898998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.028 qpair failed and we were unable to recover it. 00:25:03.028 [2024-07-16 01:18:18.899133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.028 [2024-07-16 01:18:18.899160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.028 qpair failed and we were unable to recover it. 
00:25:03.028 [2024-07-16 01:18:18.899321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.028 [2024-07-16 01:18:18.899371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.028 qpair failed and we were unable to recover it. 00:25:03.028 [2024-07-16 01:18:18.899555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.028 [2024-07-16 01:18:18.899607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.028 qpair failed and we were unable to recover it. 00:25:03.028 [2024-07-16 01:18:18.899731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.028 [2024-07-16 01:18:18.899760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.028 qpair failed and we were unable to recover it. 00:25:03.028 [2024-07-16 01:18:18.899870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.028 [2024-07-16 01:18:18.899899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.028 qpair failed and we were unable to recover it. 00:25:03.028 [2024-07-16 01:18:18.900080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.028 [2024-07-16 01:18:18.900129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.028 qpair failed and we were unable to recover it. 00:25:03.028 [2024-07-16 01:18:18.900295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.028 [2024-07-16 01:18:18.900343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.028 qpair failed and we were unable to recover it. 00:25:03.028 [2024-07-16 01:18:18.900535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.028 [2024-07-16 01:18:18.900587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.028 qpair failed and we were unable to recover it. 00:25:03.028 [2024-07-16 01:18:18.900724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.028 [2024-07-16 01:18:18.900751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.028 qpair failed and we were unable to recover it. 00:25:03.028 [2024-07-16 01:18:18.900880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.028 [2024-07-16 01:18:18.900908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.028 qpair failed and we were unable to recover it. 00:25:03.028 [2024-07-16 01:18:18.901048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.028 [2024-07-16 01:18:18.901076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.028 qpair failed and we were unable to recover it. 
00:25:03.028 [2024-07-16 01:18:18.901233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.028 [2024-07-16 01:18:18.901260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.028 qpair failed and we were unable to recover it. 00:25:03.028 [2024-07-16 01:18:18.901391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.028 [2024-07-16 01:18:18.901424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.028 qpair failed and we were unable to recover it. 00:25:03.028 [2024-07-16 01:18:18.901594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.028 [2024-07-16 01:18:18.901641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.028 qpair failed and we were unable to recover it. 00:25:03.028 [2024-07-16 01:18:18.901752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.028 [2024-07-16 01:18:18.901779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.028 qpair failed and we were unable to recover it. 00:25:03.028 [2024-07-16 01:18:18.901938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.028 [2024-07-16 01:18:18.901973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.028 qpair failed and we were unable to recover it. 00:25:03.028 [2024-07-16 01:18:18.902082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.028 [2024-07-16 01:18:18.902111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.028 qpair failed and we were unable to recover it. 00:25:03.028 [2024-07-16 01:18:18.902246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.028 [2024-07-16 01:18:18.902273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.028 qpair failed and we were unable to recover it. 00:25:03.028 [2024-07-16 01:18:18.902380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.028 [2024-07-16 01:18:18.902408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.028 qpair failed and we were unable to recover it. 00:25:03.028 [2024-07-16 01:18:18.902560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.028 [2024-07-16 01:18:18.902587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.028 qpair failed and we were unable to recover it. 00:25:03.028 [2024-07-16 01:18:18.902691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.028 [2024-07-16 01:18:18.902721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.028 qpair failed and we were unable to recover it. 
00:25:03.028 [2024-07-16 01:18:18.902857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.028 [2024-07-16 01:18:18.902885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.028 qpair failed and we were unable to recover it. 00:25:03.028 [2024-07-16 01:18:18.903042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.028 [2024-07-16 01:18:18.903091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.028 qpair failed and we were unable to recover it. 00:25:03.028 [2024-07-16 01:18:18.903202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.028 [2024-07-16 01:18:18.903230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.028 qpair failed and we were unable to recover it. 00:25:03.028 [2024-07-16 01:18:18.903360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.028 [2024-07-16 01:18:18.903387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.028 qpair failed and we were unable to recover it. 00:25:03.028 [2024-07-16 01:18:18.903509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.028 [2024-07-16 01:18:18.903561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.028 qpair failed and we were unable to recover it. 00:25:03.028 [2024-07-16 01:18:18.903711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.028 [2024-07-16 01:18:18.903739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.028 qpair failed and we were unable to recover it. 00:25:03.028 [2024-07-16 01:18:18.903870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.028 [2024-07-16 01:18:18.903898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.028 qpair failed and we were unable to recover it. 00:25:03.028 [2024-07-16 01:18:18.904065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.028 [2024-07-16 01:18:18.904116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.028 qpair failed and we were unable to recover it. 00:25:03.028 [2024-07-16 01:18:18.904272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.028 [2024-07-16 01:18:18.904320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.028 qpair failed and we were unable to recover it. 00:25:03.028 [2024-07-16 01:18:18.904459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.028 [2024-07-16 01:18:18.904509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.028 qpair failed and we were unable to recover it. 
00:25:03.028 [2024-07-16 01:18:18.904643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.028 [2024-07-16 01:18:18.904670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.028 qpair failed and we were unable to recover it. 00:25:03.028 [2024-07-16 01:18:18.904801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.028 [2024-07-16 01:18:18.904827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.028 qpair failed and we were unable to recover it. 00:25:03.028 [2024-07-16 01:18:18.905007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.028 [2024-07-16 01:18:18.905061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.028 qpair failed and we were unable to recover it. 00:25:03.028 [2024-07-16 01:18:18.905263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.028 [2024-07-16 01:18:18.905313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.028 qpair failed and we were unable to recover it. 00:25:03.028 [2024-07-16 01:18:18.905445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.028 [2024-07-16 01:18:18.905472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.028 qpair failed and we were unable to recover it. 00:25:03.028 [2024-07-16 01:18:18.905603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.028 [2024-07-16 01:18:18.905632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.028 qpair failed and we were unable to recover it. 00:25:03.028 [2024-07-16 01:18:18.905765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.029 [2024-07-16 01:18:18.905793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.029 qpair failed and we were unable to recover it. 00:25:03.029 [2024-07-16 01:18:18.905953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.029 [2024-07-16 01:18:18.905986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.029 qpair failed and we were unable to recover it. 00:25:03.029 [2024-07-16 01:18:18.906147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.029 [2024-07-16 01:18:18.906174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.029 qpair failed and we were unable to recover it. 00:25:03.029 [2024-07-16 01:18:18.906330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.029 [2024-07-16 01:18:18.906358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.029 qpair failed and we were unable to recover it. 
00:25:03.029 [2024-07-16 01:18:18.906488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.029 [2024-07-16 01:18:18.906547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.029 qpair failed and we were unable to recover it. 00:25:03.029 [2024-07-16 01:18:18.906681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.029 [2024-07-16 01:18:18.906708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.029 qpair failed and we were unable to recover it. 00:25:03.029 [2024-07-16 01:18:18.906820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.029 [2024-07-16 01:18:18.906848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.029 qpair failed and we were unable to recover it. 00:25:03.029 [2024-07-16 01:18:18.907010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.029 [2024-07-16 01:18:18.907039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.029 qpair failed and we were unable to recover it. 00:25:03.029 [2024-07-16 01:18:18.907170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.029 [2024-07-16 01:18:18.907197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.029 qpair failed and we were unable to recover it. 00:25:03.029 [2024-07-16 01:18:18.907340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.029 [2024-07-16 01:18:18.907367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.029 qpair failed and we were unable to recover it. 00:25:03.029 [2024-07-16 01:18:18.907500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.029 [2024-07-16 01:18:18.907527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.029 qpair failed and we were unable to recover it. 00:25:03.029 [2024-07-16 01:18:18.907633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.029 [2024-07-16 01:18:18.907661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.029 qpair failed and we were unable to recover it. 00:25:03.029 [2024-07-16 01:18:18.907793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.029 [2024-07-16 01:18:18.907821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.029 qpair failed and we were unable to recover it. 00:25:03.029 [2024-07-16 01:18:18.907977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.029 [2024-07-16 01:18:18.908005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.029 qpair failed and we were unable to recover it. 
00:25:03.029 [2024-07-16 01:18:18.908160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.029 [2024-07-16 01:18:18.908214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.029 qpair failed and we were unable to recover it. 00:25:03.029 [2024-07-16 01:18:18.908368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.029 [2024-07-16 01:18:18.908422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.029 qpair failed and we were unable to recover it. 00:25:03.029 [2024-07-16 01:18:18.908554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.029 [2024-07-16 01:18:18.908582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.029 qpair failed and we were unable to recover it. 00:25:03.029 [2024-07-16 01:18:18.908715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.029 [2024-07-16 01:18:18.908743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.029 qpair failed and we were unable to recover it. 00:25:03.029 [2024-07-16 01:18:18.908876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.029 [2024-07-16 01:18:18.908906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.029 qpair failed and we were unable to recover it. 00:25:03.029 [2024-07-16 01:18:18.909074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.029 [2024-07-16 01:18:18.909125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.029 qpair failed and we were unable to recover it. 00:25:03.029 [2024-07-16 01:18:18.909321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.029 [2024-07-16 01:18:18.909369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.029 qpair failed and we were unable to recover it. 00:25:03.029 [2024-07-16 01:18:18.909506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.029 [2024-07-16 01:18:18.909555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.029 qpair failed and we were unable to recover it. 00:25:03.029 [2024-07-16 01:18:18.909716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.029 [2024-07-16 01:18:18.909745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.029 qpair failed and we were unable to recover it. 00:25:03.029 [2024-07-16 01:18:18.909902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.029 [2024-07-16 01:18:18.909930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.029 qpair failed and we were unable to recover it. 
00:25:03.029 [2024-07-16 01:18:18.910073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.029 [2024-07-16 01:18:18.910101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.029 qpair failed and we were unable to recover it. 00:25:03.029 [2024-07-16 01:18:18.910261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.029 [2024-07-16 01:18:18.910288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.029 qpair failed and we were unable to recover it. 00:25:03.029 [2024-07-16 01:18:18.910423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.029 [2024-07-16 01:18:18.910450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.029 qpair failed and we were unable to recover it. 00:25:03.029 [2024-07-16 01:18:18.910554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.029 [2024-07-16 01:18:18.910582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.029 qpair failed and we were unable to recover it. 00:25:03.029 [2024-07-16 01:18:18.910721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.029 [2024-07-16 01:18:18.910749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.029 qpair failed and we were unable to recover it. 00:25:03.029 [2024-07-16 01:18:18.910911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.029 [2024-07-16 01:18:18.910939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.029 qpair failed and we were unable to recover it. 00:25:03.029 [2024-07-16 01:18:18.911090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.029 [2024-07-16 01:18:18.911118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.029 qpair failed and we were unable to recover it. 00:25:03.029 [2024-07-16 01:18:18.911250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.029 [2024-07-16 01:18:18.911279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.029 qpair failed and we were unable to recover it. 00:25:03.029 [2024-07-16 01:18:18.911431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.029 [2024-07-16 01:18:18.911459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.029 qpair failed and we were unable to recover it. 00:25:03.029 [2024-07-16 01:18:18.911581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.029 [2024-07-16 01:18:18.911608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.029 qpair failed and we were unable to recover it. 
00:25:03.029 [2024-07-16 01:18:18.911713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.029 [2024-07-16 01:18:18.911742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:03.029 qpair failed and we were unable to recover it.
00:25:03.031 [2024-07-16 01:18:18.927319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.031 [2024-07-16 01:18:18.927361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:03.031 qpair failed and we were unable to recover it.
00:25:03.031 [2024-07-16 01:18:18.933048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.031 [2024-07-16 01:18:18.933091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:03.031 qpair failed and we were unable to recover it.
00:25:03.032 [2024-07-16 01:18:18.943096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.032 [2024-07-16 01:18:18.943154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:03.032 qpair failed and we were unable to recover it.
00:25:03.033 [2024-07-16 01:18:18.946197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.033 [2024-07-16 01:18:18.946227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:03.033 qpair failed and we were unable to recover it.
00:25:03.033 [2024-07-16 01:18:18.947902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.033 [2024-07-16 01:18:18.947931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:03.033 qpair failed and we were unable to recover it.
00:25:03.033 [2024-07-16 01:18:18.951069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.033 [2024-07-16 01:18:18.951096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:03.033 qpair failed and we were unable to recover it.
[this three-line connect() failure (errno = 111, ECONNREFUSED) repeats continuously from 01:18:18.911713 through 01:18:18.951096, alternating between tqpair=0x7f7a58000b90 and tqpair=0x14b73f0, always against addr=10.0.0.2, port=4420]
00:25:03.033 [2024-07-16 01:18:18.951225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.033 [2024-07-16 01:18:18.951252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.033 qpair failed and we were unable to recover it. 00:25:03.033 [2024-07-16 01:18:18.951469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.033 [2024-07-16 01:18:18.951512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.033 qpair failed and we were unable to recover it. 00:25:03.033 [2024-07-16 01:18:18.951702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.033 [2024-07-16 01:18:18.951745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.033 qpair failed and we were unable to recover it. 00:25:03.033 [2024-07-16 01:18:18.951975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.033 [2024-07-16 01:18:18.952022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.033 qpair failed and we were unable to recover it. 00:25:03.033 [2024-07-16 01:18:18.952121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.033 [2024-07-16 01:18:18.952148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.033 qpair failed and we were unable to recover it. 00:25:03.033 [2024-07-16 01:18:18.952250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.033 [2024-07-16 01:18:18.952277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.033 qpair failed and we were unable to recover it. 00:25:03.033 [2024-07-16 01:18:18.952461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.033 [2024-07-16 01:18:18.952503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.033 qpair failed and we were unable to recover it. 00:25:03.033 [2024-07-16 01:18:18.952674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.033 [2024-07-16 01:18:18.952716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.033 qpair failed and we were unable to recover it. 00:25:03.033 [2024-07-16 01:18:18.952927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.033 [2024-07-16 01:18:18.952993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.033 qpair failed and we were unable to recover it. 00:25:03.033 [2024-07-16 01:18:18.953128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.033 [2024-07-16 01:18:18.953155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.033 qpair failed and we were unable to recover it. 
00:25:03.033 [2024-07-16 01:18:18.953284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.033 [2024-07-16 01:18:18.953311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.033 qpair failed and we were unable to recover it. 00:25:03.033 [2024-07-16 01:18:18.953444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.033 [2024-07-16 01:18:18.953471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.033 qpair failed and we were unable to recover it. 00:25:03.034 [2024-07-16 01:18:18.953633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.034 [2024-07-16 01:18:18.953673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.034 qpair failed and we were unable to recover it. 00:25:03.034 [2024-07-16 01:18:18.953858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.034 [2024-07-16 01:18:18.953884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.034 qpair failed and we were unable to recover it. 00:25:03.034 [2024-07-16 01:18:18.954018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.034 [2024-07-16 01:18:18.954046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.034 qpair failed and we were unable to recover it. 00:25:03.034 [2024-07-16 01:18:18.954175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.034 [2024-07-16 01:18:18.954201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.034 qpair failed and we were unable to recover it. 00:25:03.034 [2024-07-16 01:18:18.954329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.034 [2024-07-16 01:18:18.954356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.034 qpair failed and we were unable to recover it. 00:25:03.034 [2024-07-16 01:18:18.954485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.034 [2024-07-16 01:18:18.954511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.034 qpair failed and we were unable to recover it. 00:25:03.034 [2024-07-16 01:18:18.954793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.034 [2024-07-16 01:18:18.954857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.034 qpair failed and we were unable to recover it. 00:25:03.034 [2024-07-16 01:18:18.955059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.034 [2024-07-16 01:18:18.955087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.034 qpair failed and we were unable to recover it. 
00:25:03.034 [2024-07-16 01:18:18.955245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.034 [2024-07-16 01:18:18.955272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.034 qpair failed and we were unable to recover it. 00:25:03.034 [2024-07-16 01:18:18.955467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.034 [2024-07-16 01:18:18.955494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.034 qpair failed and we were unable to recover it. 00:25:03.034 [2024-07-16 01:18:18.955771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.034 [2024-07-16 01:18:18.955810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.034 qpair failed and we were unable to recover it. 00:25:03.034 [2024-07-16 01:18:18.955984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.034 [2024-07-16 01:18:18.956028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.034 qpair failed and we were unable to recover it. 00:25:03.034 [2024-07-16 01:18:18.956161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.034 [2024-07-16 01:18:18.956188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.034 qpair failed and we were unable to recover it. 00:25:03.034 [2024-07-16 01:18:18.956316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.034 [2024-07-16 01:18:18.956344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.034 qpair failed and we were unable to recover it. 00:25:03.034 [2024-07-16 01:18:18.956455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.034 [2024-07-16 01:18:18.956512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.034 qpair failed and we were unable to recover it. 00:25:03.034 [2024-07-16 01:18:18.956704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.034 [2024-07-16 01:18:18.956747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.034 qpair failed and we were unable to recover it. 00:25:03.034 [2024-07-16 01:18:18.956926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.034 [2024-07-16 01:18:18.956961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.034 qpair failed and we were unable to recover it. 00:25:03.034 [2024-07-16 01:18:18.957120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.034 [2024-07-16 01:18:18.957148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.034 qpair failed and we were unable to recover it. 
00:25:03.034 [2024-07-16 01:18:18.957364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.034 [2024-07-16 01:18:18.957391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.034 qpair failed and we were unable to recover it. 00:25:03.034 [2024-07-16 01:18:18.957580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.034 [2024-07-16 01:18:18.957618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.034 qpair failed and we were unable to recover it. 00:25:03.034 [2024-07-16 01:18:18.957764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.034 [2024-07-16 01:18:18.957808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.034 qpair failed and we were unable to recover it. 00:25:03.034 [2024-07-16 01:18:18.957975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.034 [2024-07-16 01:18:18.958003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.034 qpair failed and we were unable to recover it. 00:25:03.034 [2024-07-16 01:18:18.958107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.034 [2024-07-16 01:18:18.958134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.034 qpair failed and we were unable to recover it. 00:25:03.034 [2024-07-16 01:18:18.958237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.034 [2024-07-16 01:18:18.958264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.034 qpair failed and we were unable to recover it. 00:25:03.034 [2024-07-16 01:18:18.958456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.034 [2024-07-16 01:18:18.958498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.034 qpair failed and we were unable to recover it. 00:25:03.034 [2024-07-16 01:18:18.958718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.034 [2024-07-16 01:18:18.958760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.034 qpair failed and we were unable to recover it. 00:25:03.034 [2024-07-16 01:18:18.958935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.034 [2024-07-16 01:18:18.958988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.034 qpair failed and we were unable to recover it. 00:25:03.034 [2024-07-16 01:18:18.959165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.034 [2024-07-16 01:18:18.959192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.034 qpair failed and we were unable to recover it. 
00:25:03.034 [2024-07-16 01:18:18.959370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.034 [2024-07-16 01:18:18.959396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.034 qpair failed and we were unable to recover it. 00:25:03.034 [2024-07-16 01:18:18.959504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.034 [2024-07-16 01:18:18.959530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.034 qpair failed and we were unable to recover it. 00:25:03.034 [2024-07-16 01:18:18.959700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.034 [2024-07-16 01:18:18.959742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.034 qpair failed and we were unable to recover it. 00:25:03.034 [2024-07-16 01:18:18.959904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.034 [2024-07-16 01:18:18.959930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.034 qpair failed and we were unable to recover it. 00:25:03.034 [2024-07-16 01:18:18.960063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.034 [2024-07-16 01:18:18.960089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.034 qpair failed and we were unable to recover it. 00:25:03.034 [2024-07-16 01:18:18.960214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.034 [2024-07-16 01:18:18.960241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.034 qpair failed and we were unable to recover it. 00:25:03.034 [2024-07-16 01:18:18.960343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.034 [2024-07-16 01:18:18.960370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.034 qpair failed and we were unable to recover it. 00:25:03.034 [2024-07-16 01:18:18.960492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.034 [2024-07-16 01:18:18.960520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.034 qpair failed and we were unable to recover it. 00:25:03.034 [2024-07-16 01:18:18.960707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.034 [2024-07-16 01:18:18.960756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.034 qpair failed and we were unable to recover it. 00:25:03.034 [2024-07-16 01:18:18.960903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.034 [2024-07-16 01:18:18.960930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.034 qpair failed and we were unable to recover it. 
00:25:03.034 [2024-07-16 01:18:18.961090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.034 [2024-07-16 01:18:18.961117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.034 qpair failed and we were unable to recover it. 00:25:03.034 [2024-07-16 01:18:18.961211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.034 [2024-07-16 01:18:18.961256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.034 qpair failed and we were unable to recover it. 00:25:03.034 [2024-07-16 01:18:18.961431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.034 [2024-07-16 01:18:18.961458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.034 qpair failed and we were unable to recover it. 00:25:03.034 [2024-07-16 01:18:18.961562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.034 [2024-07-16 01:18:18.961592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.034 qpair failed and we were unable to recover it. 00:25:03.034 [2024-07-16 01:18:18.961803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.034 [2024-07-16 01:18:18.961844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.034 qpair failed and we were unable to recover it. 00:25:03.034 [2024-07-16 01:18:18.962012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.034 [2024-07-16 01:18:18.962039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.034 qpair failed and we were unable to recover it. 00:25:03.034 [2024-07-16 01:18:18.962176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.034 [2024-07-16 01:18:18.962205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.034 qpair failed and we were unable to recover it. 00:25:03.034 [2024-07-16 01:18:18.962424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.034 [2024-07-16 01:18:18.962467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.034 qpair failed and we were unable to recover it. 00:25:03.034 [2024-07-16 01:18:18.962636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.034 [2024-07-16 01:18:18.962673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.034 qpair failed and we were unable to recover it. 00:25:03.034 [2024-07-16 01:18:18.962818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.034 [2024-07-16 01:18:18.962854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.034 qpair failed and we were unable to recover it. 
00:25:03.034 [2024-07-16 01:18:18.963031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.034 [2024-07-16 01:18:18.963059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.034 qpair failed and we were unable to recover it. 00:25:03.034 [2024-07-16 01:18:18.963216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.034 [2024-07-16 01:18:18.963243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.034 qpair failed and we were unable to recover it. 00:25:03.035 [2024-07-16 01:18:18.963413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.035 [2024-07-16 01:18:18.963455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.035 qpair failed and we were unable to recover it. 00:25:03.035 [2024-07-16 01:18:18.963643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.035 [2024-07-16 01:18:18.963683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.035 qpair failed and we were unable to recover it. 00:25:03.035 [2024-07-16 01:18:18.963920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.035 [2024-07-16 01:18:18.963969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.035 qpair failed and we were unable to recover it. 00:25:03.035 [2024-07-16 01:18:18.964117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.035 [2024-07-16 01:18:18.964143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.035 qpair failed and we were unable to recover it. 00:25:03.035 [2024-07-16 01:18:18.964338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.035 [2024-07-16 01:18:18.964380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.035 qpair failed and we were unable to recover it. 00:25:03.035 [2024-07-16 01:18:18.964521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.035 [2024-07-16 01:18:18.964572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.035 qpair failed and we were unable to recover it. 00:25:03.035 [2024-07-16 01:18:18.964727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.035 [2024-07-16 01:18:18.964770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.035 qpair failed and we were unable to recover it. 00:25:03.035 [2024-07-16 01:18:18.964929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.035 [2024-07-16 01:18:18.965013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.035 qpair failed and we were unable to recover it. 
00:25:03.035 [2024-07-16 01:18:18.965173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.035 [2024-07-16 01:18:18.965214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.035 qpair failed and we were unable to recover it. 00:25:03.035 [2024-07-16 01:18:18.965437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.035 [2024-07-16 01:18:18.965478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.035 qpair failed and we were unable to recover it. 00:25:03.035 [2024-07-16 01:18:18.965669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.035 [2024-07-16 01:18:18.965710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.035 qpair failed and we were unable to recover it. 00:25:03.035 [2024-07-16 01:18:18.965926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.035 [2024-07-16 01:18:18.965981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.035 qpair failed and we were unable to recover it. 00:25:03.035 [2024-07-16 01:18:18.966204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.035 [2024-07-16 01:18:18.966246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.035 qpair failed and we were unable to recover it. 00:25:03.035 [2024-07-16 01:18:18.966414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.035 [2024-07-16 01:18:18.966455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.035 qpair failed and we were unable to recover it. 00:25:03.035 [2024-07-16 01:18:18.966625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.035 [2024-07-16 01:18:18.966663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.035 qpair failed and we were unable to recover it. 00:25:03.035 [2024-07-16 01:18:18.966836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.035 [2024-07-16 01:18:18.966893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.035 qpair failed and we were unable to recover it. 00:25:03.035 [2024-07-16 01:18:18.967071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.035 [2024-07-16 01:18:18.967110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.035 qpair failed and we were unable to recover it. 00:25:03.035 [2024-07-16 01:18:18.967289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.035 [2024-07-16 01:18:18.967327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.035 qpair failed and we were unable to recover it. 
00:25:03.035 [2024-07-16 01:18:18.967555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.035 [2024-07-16 01:18:18.967604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.035 qpair failed and we were unable to recover it. 00:25:03.035 [2024-07-16 01:18:18.967793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.035 [2024-07-16 01:18:18.967836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.035 qpair failed and we were unable to recover it. 00:25:03.035 [2024-07-16 01:18:18.968038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.035 [2024-07-16 01:18:18.968095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.035 qpair failed and we were unable to recover it. 00:25:03.035 [2024-07-16 01:18:18.968242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.035 [2024-07-16 01:18:18.968282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.035 qpair failed and we were unable to recover it. 00:25:03.035 [2024-07-16 01:18:18.968498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.035 [2024-07-16 01:18:18.968542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.035 qpair failed and we were unable to recover it. 00:25:03.035 [2024-07-16 01:18:18.968711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.035 [2024-07-16 01:18:18.968754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.035 qpair failed and we were unable to recover it. 00:25:03.035 [2024-07-16 01:18:18.968939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.035 [2024-07-16 01:18:18.968992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.035 qpair failed and we were unable to recover it. 00:25:03.035 [2024-07-16 01:18:18.969185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.035 [2024-07-16 01:18:18.969254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.035 qpair failed and we were unable to recover it. 00:25:03.035 [2024-07-16 01:18:18.969475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.035 [2024-07-16 01:18:18.969518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.035 qpair failed and we were unable to recover it. 00:25:03.035 [2024-07-16 01:18:18.969704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.035 [2024-07-16 01:18:18.969745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.035 qpair failed and we were unable to recover it. 
00:25:03.035 [2024-07-16 01:18:18.969936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.035 [2024-07-16 01:18:18.969988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.035 qpair failed and we were unable to recover it. 00:25:03.035 [2024-07-16 01:18:18.970214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.035 [2024-07-16 01:18:18.970263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.035 qpair failed and we were unable to recover it. 00:25:03.035 [2024-07-16 01:18:18.970472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.035 [2024-07-16 01:18:18.970515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.035 qpair failed and we were unable to recover it. 00:25:03.035 [2024-07-16 01:18:18.970700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.035 [2024-07-16 01:18:18.970743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.035 qpair failed and we were unable to recover it. 00:25:03.035 [2024-07-16 01:18:18.970991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.035 [2024-07-16 01:18:18.971037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.035 qpair failed and we were unable to recover it. 00:25:03.035 [2024-07-16 01:18:18.971200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.035 [2024-07-16 01:18:18.971243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.035 qpair failed and we were unable to recover it. 00:25:03.035 [2024-07-16 01:18:18.971415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.035 [2024-07-16 01:18:18.971457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.035 qpair failed and we were unable to recover it. 00:25:03.035 [2024-07-16 01:18:18.971641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.035 [2024-07-16 01:18:18.971687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.035 qpair failed and we were unable to recover it. 00:25:03.312 [2024-07-16 01:18:18.971860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.312 [2024-07-16 01:18:18.971904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.312 qpair failed and we were unable to recover it. 00:25:03.312 [2024-07-16 01:18:18.972134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.312 [2024-07-16 01:18:18.972177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.312 qpair failed and we were unable to recover it. 
00:25:03.312 [2024-07-16 01:18:18.972347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.312 [2024-07-16 01:18:18.972392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.312 qpair failed and we were unable to recover it. 00:25:03.312 [2024-07-16 01:18:18.972615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.312 [2024-07-16 01:18:18.972660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.312 qpair failed and we were unable to recover it. 00:25:03.312 [2024-07-16 01:18:18.972879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.312 [2024-07-16 01:18:18.972917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.312 qpair failed and we were unable to recover it. 00:25:03.312 [2024-07-16 01:18:18.973087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.312 [2024-07-16 01:18:18.973127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.312 qpair failed and we were unable to recover it. 00:25:03.312 [2024-07-16 01:18:18.973278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.312 [2024-07-16 01:18:18.973332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.312 qpair failed and we were unable to recover it. 00:25:03.312 [2024-07-16 01:18:18.973506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.312 [2024-07-16 01:18:18.973551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.312 qpair failed and we were unable to recover it. 00:25:03.312 [2024-07-16 01:18:18.973742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.312 [2024-07-16 01:18:18.973785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.312 qpair failed and we were unable to recover it. 00:25:03.312 [2024-07-16 01:18:18.973995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.312 [2024-07-16 01:18:18.974040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.312 qpair failed and we were unable to recover it. 00:25:03.312 [2024-07-16 01:18:18.974279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.312 [2024-07-16 01:18:18.974323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.312 qpair failed and we were unable to recover it. 00:25:03.312 [2024-07-16 01:18:18.974467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.312 [2024-07-16 01:18:18.974506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.312 qpair failed and we were unable to recover it. 
00:25:03.312 [2024-07-16 01:18:18.974663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.312 [2024-07-16 01:18:18.974701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.313 qpair failed and we were unable to recover it. 00:25:03.313 [2024-07-16 01:18:18.974882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.313 [2024-07-16 01:18:18.974920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.313 qpair failed and we were unable to recover it. 00:25:03.313 [2024-07-16 01:18:18.975111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.313 [2024-07-16 01:18:18.975152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.313 qpair failed and we were unable to recover it. 00:25:03.313 [2024-07-16 01:18:18.975311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.313 [2024-07-16 01:18:18.975354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.313 qpair failed and we were unable to recover it. 00:25:03.313 [2024-07-16 01:18:18.975549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.313 [2024-07-16 01:18:18.975592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.313 qpair failed and we were unable to recover it. 00:25:03.313 [2024-07-16 01:18:18.975797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.313 [2024-07-16 01:18:18.975835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.313 qpair failed and we were unable to recover it. 00:25:03.313 [2024-07-16 01:18:18.976013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.313 [2024-07-16 01:18:18.976054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.313 qpair failed and we were unable to recover it. 00:25:03.313 [2024-07-16 01:18:18.976200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.313 [2024-07-16 01:18:18.976238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.313 qpair failed and we were unable to recover it. 00:25:03.313 [2024-07-16 01:18:18.976453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.313 [2024-07-16 01:18:18.976501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.313 qpair failed and we were unable to recover it. 00:25:03.313 [2024-07-16 01:18:18.976682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.313 [2024-07-16 01:18:18.976724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.313 qpair failed and we were unable to recover it. 
00:25:03.313 [2024-07-16 01:18:18.976899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.313 [2024-07-16 01:18:18.976968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.313 qpair failed and we were unable to recover it. 00:25:03.313 [2024-07-16 01:18:18.977173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.313 [2024-07-16 01:18:18.977214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.313 qpair failed and we were unable to recover it. 00:25:03.313 [2024-07-16 01:18:18.977435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.313 [2024-07-16 01:18:18.977496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.313 qpair failed and we were unable to recover it. 00:25:03.313 [2024-07-16 01:18:18.977699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.313 [2024-07-16 01:18:18.977742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.313 qpair failed and we were unable to recover it. 00:25:03.313 [2024-07-16 01:18:18.977943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.313 [2024-07-16 01:18:18.978013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.313 qpair failed and we were unable to recover it. 00:25:03.313 [2024-07-16 01:18:18.978170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.313 [2024-07-16 01:18:18.978214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.313 qpair failed and we were unable to recover it. 00:25:03.313 [2024-07-16 01:18:18.978414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.313 [2024-07-16 01:18:18.978459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.313 qpair failed and we were unable to recover it. 00:25:03.313 [2024-07-16 01:18:18.978678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.313 [2024-07-16 01:18:18.978721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.313 qpair failed and we were unable to recover it. 00:25:03.313 [2024-07-16 01:18:18.978918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.313 [2024-07-16 01:18:18.978973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.313 qpair failed and we were unable to recover it. 00:25:03.313 [2024-07-16 01:18:18.979198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.313 [2024-07-16 01:18:18.979241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.313 qpair failed and we were unable to recover it. 
00:25:03.313 [2024-07-16 01:18:18.979454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.313 [2024-07-16 01:18:18.979496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:03.313 qpair failed and we were unable to recover it.
[... the identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." trio for tqpair=0x14b73f0 repeats for every retry from 01:18:18.979 through 01:18:19.034; only the timestamps differ ...]
00:25:03.319 [2024-07-16 01:18:19.032886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.319 [2024-07-16 01:18:19.032937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.319 qpair failed and we were unable to recover it. 00:25:03.319 [2024-07-16 01:18:19.033158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.319 [2024-07-16 01:18:19.033211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.319 qpair failed and we were unable to recover it. 00:25:03.319 [2024-07-16 01:18:19.033453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.319 [2024-07-16 01:18:19.033504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.319 qpair failed and we were unable to recover it. 00:25:03.319 [2024-07-16 01:18:19.033726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.319 [2024-07-16 01:18:19.033777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.319 qpair failed and we were unable to recover it. 00:25:03.319 [2024-07-16 01:18:19.033990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.319 [2024-07-16 01:18:19.034030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.319 qpair failed and we were unable to recover it. 00:25:03.319 [2024-07-16 01:18:19.034196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.319 [2024-07-16 01:18:19.034236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.319 qpair failed and we were unable to recover it. 00:25:03.319 [2024-07-16 01:18:19.034386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.319 [2024-07-16 01:18:19.034424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.319 qpair failed and we were unable to recover it. 00:25:03.319 [2024-07-16 01:18:19.034648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.319 [2024-07-16 01:18:19.034702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.319 qpair failed and we were unable to recover it. 00:25:03.319 [2024-07-16 01:18:19.035004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.319 [2024-07-16 01:18:19.035080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:03.319 qpair failed and we were unable to recover it. 00:25:03.319 [2024-07-16 01:18:19.035339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.319 [2024-07-16 01:18:19.035402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:03.319 qpair failed and we were unable to recover it. 
00:25:03.319 [2024-07-16 01:18:19.035684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.319 [2024-07-16 01:18:19.035746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:03.319 qpair failed and we were unable to recover it. 00:25:03.319 [2024-07-16 01:18:19.036045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.319 [2024-07-16 01:18:19.036108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:03.319 qpair failed and we were unable to recover it. 00:25:03.319 [2024-07-16 01:18:19.036390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.319 [2024-07-16 01:18:19.036451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:03.319 qpair failed and we were unable to recover it. 00:25:03.319 [2024-07-16 01:18:19.036796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.319 [2024-07-16 01:18:19.036869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:03.319 qpair failed and we were unable to recover it. 00:25:03.319 [2024-07-16 01:18:19.037223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.319 [2024-07-16 01:18:19.037283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:03.319 qpair failed and we were unable to recover it. 00:25:03.319 [2024-07-16 01:18:19.037529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.319 [2024-07-16 01:18:19.037589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:03.319 qpair failed and we were unable to recover it. 00:25:03.319 [2024-07-16 01:18:19.037925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.319 [2024-07-16 01:18:19.038039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:03.319 qpair failed and we were unable to recover it. 00:25:03.319 [2024-07-16 01:18:19.038338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.319 [2024-07-16 01:18:19.038399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:03.319 qpair failed and we were unable to recover it. 00:25:03.319 [2024-07-16 01:18:19.038723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.319 [2024-07-16 01:18:19.038768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:03.319 qpair failed and we were unable to recover it. 00:25:03.319 [2024-07-16 01:18:19.038996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.319 [2024-07-16 01:18:19.039043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:03.319 qpair failed and we were unable to recover it. 
00:25:03.319 [2024-07-16 01:18:19.039346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.319 [2024-07-16 01:18:19.039406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:03.319 qpair failed and we were unable to recover it. 00:25:03.319 [2024-07-16 01:18:19.039653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.319 [2024-07-16 01:18:19.039722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:03.319 qpair failed and we were unable to recover it. 00:25:03.319 [2024-07-16 01:18:19.040008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.320 [2024-07-16 01:18:19.040053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:03.320 qpair failed and we were unable to recover it. 00:25:03.320 [2024-07-16 01:18:19.040313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.320 [2024-07-16 01:18:19.040374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:03.320 qpair failed and we were unable to recover it. 00:25:03.320 [2024-07-16 01:18:19.040666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.320 [2024-07-16 01:18:19.040726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:03.320 qpair failed and we were unable to recover it. 00:25:03.320 [2024-07-16 01:18:19.041037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.320 [2024-07-16 01:18:19.041097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:03.320 qpair failed and we were unable to recover it. 00:25:03.320 [2024-07-16 01:18:19.041377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.320 [2024-07-16 01:18:19.041438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:03.320 qpair failed and we were unable to recover it. 00:25:03.320 [2024-07-16 01:18:19.041698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.320 [2024-07-16 01:18:19.041743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:03.320 qpair failed and we were unable to recover it. 00:25:03.320 [2024-07-16 01:18:19.041984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.320 [2024-07-16 01:18:19.042029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:03.320 qpair failed and we were unable to recover it. 00:25:03.320 [2024-07-16 01:18:19.042325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.320 [2024-07-16 01:18:19.042385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:03.320 qpair failed and we were unable to recover it. 
00:25:03.320 [2024-07-16 01:18:19.042659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.320 [2024-07-16 01:18:19.042722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:03.320 qpair failed and we were unable to recover it. 00:25:03.320 [2024-07-16 01:18:19.043052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.320 [2024-07-16 01:18:19.043112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:03.320 qpair failed and we were unable to recover it. 00:25:03.320 [2024-07-16 01:18:19.043391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.320 [2024-07-16 01:18:19.043451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:03.320 qpair failed and we were unable to recover it. 00:25:03.320 [2024-07-16 01:18:19.043759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.320 [2024-07-16 01:18:19.043817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:03.320 qpair failed and we were unable to recover it. 00:25:03.320 [2024-07-16 01:18:19.044126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.320 [2024-07-16 01:18:19.044186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:03.320 qpair failed and we were unable to recover it. 00:25:03.320 [2024-07-16 01:18:19.044473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.320 [2024-07-16 01:18:19.044541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:03.320 qpair failed and we were unable to recover it. 00:25:03.320 [2024-07-16 01:18:19.044787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.320 [2024-07-16 01:18:19.044846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:03.320 qpair failed and we were unable to recover it. 00:25:03.320 [2024-07-16 01:18:19.045146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.320 [2024-07-16 01:18:19.045216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:03.320 qpair failed and we were unable to recover it. 00:25:03.320 [2024-07-16 01:18:19.045554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.320 [2024-07-16 01:18:19.045628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:03.320 qpair failed and we were unable to recover it. 00:25:03.320 [2024-07-16 01:18:19.045949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.320 [2024-07-16 01:18:19.046044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:03.320 qpair failed and we were unable to recover it. 
00:25:03.320 [2024-07-16 01:18:19.046317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.320 [2024-07-16 01:18:19.046376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:03.320 qpair failed and we were unable to recover it. 00:25:03.320 [2024-07-16 01:18:19.046684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.320 [2024-07-16 01:18:19.046743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:03.320 qpair failed and we were unable to recover it. 00:25:03.320 [2024-07-16 01:18:19.047064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.320 [2024-07-16 01:18:19.047123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:03.320 qpair failed and we were unable to recover it. 00:25:03.320 [2024-07-16 01:18:19.047431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.320 [2024-07-16 01:18:19.047490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:03.320 qpair failed and we were unable to recover it. 00:25:03.320 [2024-07-16 01:18:19.047796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.320 [2024-07-16 01:18:19.047870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:03.320 qpair failed and we were unable to recover it. 00:25:03.320 [2024-07-16 01:18:19.048195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.320 [2024-07-16 01:18:19.048275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.320 qpair failed and we were unable to recover it. 00:25:03.320 [2024-07-16 01:18:19.048551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.320 [2024-07-16 01:18:19.048607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.320 qpair failed and we were unable to recover it. 00:25:03.320 [2024-07-16 01:18:19.048813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.320 [2024-07-16 01:18:19.048865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.320 qpair failed and we were unable to recover it. 00:25:03.320 [2024-07-16 01:18:19.049122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.320 [2024-07-16 01:18:19.049177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.320 qpair failed and we were unable to recover it. 00:25:03.320 [2024-07-16 01:18:19.049437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.320 [2024-07-16 01:18:19.049489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.320 qpair failed and we were unable to recover it. 
00:25:03.320 [2024-07-16 01:18:19.049704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.320 [2024-07-16 01:18:19.049771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.320 qpair failed and we were unable to recover it. 00:25:03.320 [2024-07-16 01:18:19.050006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.320 [2024-07-16 01:18:19.050060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.320 qpair failed and we were unable to recover it. 00:25:03.320 [2024-07-16 01:18:19.050323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.320 [2024-07-16 01:18:19.050386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.320 qpair failed and we were unable to recover it. 00:25:03.320 [2024-07-16 01:18:19.050703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.320 [2024-07-16 01:18:19.050767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.320 qpair failed and we were unable to recover it. 00:25:03.320 [2024-07-16 01:18:19.051071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.320 [2024-07-16 01:18:19.051136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.320 qpair failed and we were unable to recover it. 00:25:03.320 [2024-07-16 01:18:19.051350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.320 [2024-07-16 01:18:19.051427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.320 qpair failed and we were unable to recover it. 00:25:03.320 [2024-07-16 01:18:19.051731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.320 [2024-07-16 01:18:19.051795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.320 qpair failed and we were unable to recover it. 00:25:03.320 [2024-07-16 01:18:19.052075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.321 [2024-07-16 01:18:19.052144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.321 qpair failed and we were unable to recover it. 00:25:03.321 [2024-07-16 01:18:19.052419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.321 [2024-07-16 01:18:19.052471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.321 qpair failed and we were unable to recover it. 00:25:03.321 [2024-07-16 01:18:19.052674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.321 [2024-07-16 01:18:19.052727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.321 qpair failed and we were unable to recover it. 
00:25:03.321 [2024-07-16 01:18:19.052926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.321 [2024-07-16 01:18:19.052988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.321 qpair failed and we were unable to recover it. 00:25:03.321 [2024-07-16 01:18:19.053221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.321 [2024-07-16 01:18:19.053272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.321 qpair failed and we were unable to recover it. 00:25:03.321 [2024-07-16 01:18:19.053533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.321 [2024-07-16 01:18:19.053585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.321 qpair failed and we were unable to recover it. 00:25:03.321 [2024-07-16 01:18:19.053808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.321 [2024-07-16 01:18:19.053860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.321 qpair failed and we were unable to recover it. 00:25:03.321 [2024-07-16 01:18:19.054134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.321 [2024-07-16 01:18:19.054187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.321 qpair failed and we were unable to recover it. 00:25:03.321 [2024-07-16 01:18:19.054431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.321 [2024-07-16 01:18:19.054483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.321 qpair failed and we were unable to recover it. 00:25:03.321 [2024-07-16 01:18:19.054744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.321 [2024-07-16 01:18:19.054795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.321 qpair failed and we were unable to recover it. 00:25:03.321 [2024-07-16 01:18:19.055021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.321 [2024-07-16 01:18:19.055073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.321 qpair failed and we were unable to recover it. 00:25:03.321 [2024-07-16 01:18:19.055261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.321 [2024-07-16 01:18:19.055312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.321 qpair failed and we were unable to recover it. 00:25:03.321 [2024-07-16 01:18:19.055533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.321 [2024-07-16 01:18:19.055586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.321 qpair failed and we were unable to recover it. 
00:25:03.321 [2024-07-16 01:18:19.055814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.321 [2024-07-16 01:18:19.055867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.321 qpair failed and we were unable to recover it. 00:25:03.321 [2024-07-16 01:18:19.056110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.321 [2024-07-16 01:18:19.056161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.321 qpair failed and we were unable to recover it. 00:25:03.321 [2024-07-16 01:18:19.056346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.321 [2024-07-16 01:18:19.056397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.321 qpair failed and we were unable to recover it. 00:25:03.321 [2024-07-16 01:18:19.056628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.321 [2024-07-16 01:18:19.056679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.321 qpair failed and we were unable to recover it. 00:25:03.321 [2024-07-16 01:18:19.056929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.321 [2024-07-16 01:18:19.056992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.321 qpair failed and we were unable to recover it. 00:25:03.321 [2024-07-16 01:18:19.057221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.321 [2024-07-16 01:18:19.057273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.321 qpair failed and we were unable to recover it. 00:25:03.321 [2024-07-16 01:18:19.057497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.321 [2024-07-16 01:18:19.057555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.321 qpair failed and we were unable to recover it. 00:25:03.321 [2024-07-16 01:18:19.057768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.321 [2024-07-16 01:18:19.057825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.321 qpair failed and we were unable to recover it. 00:25:03.321 [2024-07-16 01:18:19.058094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.321 [2024-07-16 01:18:19.058151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.321 qpair failed and we were unable to recover it. 00:25:03.321 [2024-07-16 01:18:19.058373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.321 [2024-07-16 01:18:19.058428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.321 qpair failed and we were unable to recover it. 
00:25:03.321 [2024-07-16 01:18:19.058672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.321 [2024-07-16 01:18:19.058726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.321 qpair failed and we were unable to recover it. 00:25:03.321 [2024-07-16 01:18:19.058920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.321 [2024-07-16 01:18:19.058993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.321 qpair failed and we were unable to recover it. 00:25:03.321 [2024-07-16 01:18:19.059243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.321 [2024-07-16 01:18:19.059297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.321 qpair failed and we were unable to recover it. 00:25:03.321 [2024-07-16 01:18:19.059536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.321 [2024-07-16 01:18:19.059590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.321 qpair failed and we were unable to recover it. 00:25:03.321 [2024-07-16 01:18:19.059809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.321 [2024-07-16 01:18:19.059864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.321 qpair failed and we were unable to recover it. 00:25:03.321 [2024-07-16 01:18:19.060088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.321 [2024-07-16 01:18:19.060144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.321 qpair failed and we were unable to recover it. 00:25:03.321 [2024-07-16 01:18:19.060409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.321 [2024-07-16 01:18:19.060464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.321 qpair failed and we were unable to recover it. 00:25:03.321 [2024-07-16 01:18:19.060647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.321 [2024-07-16 01:18:19.060702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.321 qpair failed and we were unable to recover it. 00:25:03.321 [2024-07-16 01:18:19.060918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.321 [2024-07-16 01:18:19.060999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.321 qpair failed and we were unable to recover it. 00:25:03.321 [2024-07-16 01:18:19.061201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.321 [2024-07-16 01:18:19.061257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.321 qpair failed and we were unable to recover it. 
00:25:03.321 [2024-07-16 01:18:19.061447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.321 [2024-07-16 01:18:19.061503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.321 qpair failed and we were unable to recover it. 00:25:03.321 [2024-07-16 01:18:19.061719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.321 [2024-07-16 01:18:19.061774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.321 qpair failed and we were unable to recover it. 00:25:03.321 [2024-07-16 01:18:19.062031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.321 [2024-07-16 01:18:19.062088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.321 qpair failed and we were unable to recover it. 00:25:03.322 [2024-07-16 01:18:19.062302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.322 [2024-07-16 01:18:19.062357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.322 qpair failed and we were unable to recover it. 00:25:03.322 [2024-07-16 01:18:19.062627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.322 [2024-07-16 01:18:19.062682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.322 qpair failed and we were unable to recover it. 00:25:03.322 [2024-07-16 01:18:19.062923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.322 [2024-07-16 01:18:19.062989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.322 qpair failed and we were unable to recover it. 00:25:03.322 [2024-07-16 01:18:19.063231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.322 [2024-07-16 01:18:19.063290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.322 qpair failed and we were unable to recover it. 00:25:03.322 [2024-07-16 01:18:19.063537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.322 [2024-07-16 01:18:19.063592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.322 qpair failed and we were unable to recover it. 00:25:03.322 [2024-07-16 01:18:19.063839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.322 [2024-07-16 01:18:19.063895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.322 qpair failed and we were unable to recover it. 00:25:03.322 [2024-07-16 01:18:19.064185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.322 [2024-07-16 01:18:19.064242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.322 qpair failed and we were unable to recover it. 
00:25:03.322 [2024-07-16 01:18:19.064485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.322 [2024-07-16 01:18:19.064540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.322 qpair failed and we were unable to recover it. 00:25:03.322 [2024-07-16 01:18:19.064773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.322 [2024-07-16 01:18:19.064830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.322 qpair failed and we were unable to recover it. 00:25:03.322 [2024-07-16 01:18:19.065078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.322 [2024-07-16 01:18:19.065134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.322 qpair failed and we were unable to recover it. 00:25:03.322 [2024-07-16 01:18:19.065375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.322 [2024-07-16 01:18:19.065430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.322 qpair failed and we were unable to recover it. 00:25:03.322 [2024-07-16 01:18:19.065643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.322 [2024-07-16 01:18:19.065698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.322 qpair failed and we were unable to recover it. 00:25:03.322 [2024-07-16 01:18:19.065972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.322 [2024-07-16 01:18:19.066027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.322 qpair failed and we were unable to recover it. 00:25:03.322 [2024-07-16 01:18:19.066262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.322 [2024-07-16 01:18:19.066317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.322 qpair failed and we were unable to recover it. 00:25:03.322 [2024-07-16 01:18:19.066566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.322 [2024-07-16 01:18:19.066621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.322 qpair failed and we were unable to recover it. 00:25:03.322 [2024-07-16 01:18:19.066884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.322 [2024-07-16 01:18:19.066938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.322 qpair failed and we were unable to recover it. 00:25:03.322 [2024-07-16 01:18:19.067206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.322 [2024-07-16 01:18:19.067260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.322 qpair failed and we were unable to recover it. 
00:25:03.322 [2024-07-16 01:18:19.067535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.322 [2024-07-16 01:18:19.067590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.322 qpair failed and we were unable to recover it. 00:25:03.322 [2024-07-16 01:18:19.067859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.322 [2024-07-16 01:18:19.067913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.322 qpair failed and we were unable to recover it. 00:25:03.322 [2024-07-16 01:18:19.068174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.322 [2024-07-16 01:18:19.068228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.322 qpair failed and we were unable to recover it. 00:25:03.322 [2024-07-16 01:18:19.068425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.322 [2024-07-16 01:18:19.068483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.322 qpair failed and we were unable to recover it. 00:25:03.322 [2024-07-16 01:18:19.068710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.322 [2024-07-16 01:18:19.068766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.322 qpair failed and we were unable to recover it. 00:25:03.322 [2024-07-16 01:18:19.069018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.322 [2024-07-16 01:18:19.069076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.322 qpair failed and we were unable to recover it. 00:25:03.322 [2024-07-16 01:18:19.069323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.322 [2024-07-16 01:18:19.069379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.322 qpair failed and we were unable to recover it. 00:25:03.322 [2024-07-16 01:18:19.069615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.322 [2024-07-16 01:18:19.069670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.322 qpair failed and we were unable to recover it. 00:25:03.322 [2024-07-16 01:18:19.069947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.322 [2024-07-16 01:18:19.070040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.322 qpair failed and we were unable to recover it. 00:25:03.322 [2024-07-16 01:18:19.070322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.322 [2024-07-16 01:18:19.070376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.322 qpair failed and we were unable to recover it. 
00:25:03.323 [2024-07-16 01:18:19.070613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.323 [2024-07-16 01:18:19.070667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.323 qpair failed and we were unable to recover it. 00:25:03.323 [2024-07-16 01:18:19.070877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.323 [2024-07-16 01:18:19.070934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.323 qpair failed and we were unable to recover it. 00:25:03.323 [2024-07-16 01:18:19.071188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.323 [2024-07-16 01:18:19.071254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.323 qpair failed and we were unable to recover it. 00:25:03.323 [2024-07-16 01:18:19.071487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.323 [2024-07-16 01:18:19.071543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.323 qpair failed and we were unable to recover it. 00:25:03.323 [2024-07-16 01:18:19.071755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.323 [2024-07-16 01:18:19.071810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.323 qpair failed and we were unable to recover it. 00:25:03.323 [2024-07-16 01:18:19.072048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.323 [2024-07-16 01:18:19.072124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.323 qpair failed and we were unable to recover it. 00:25:03.323 [2024-07-16 01:18:19.072372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.323 [2024-07-16 01:18:19.072427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.323 qpair failed and we were unable to recover it. 00:25:03.323 [2024-07-16 01:18:19.072665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.323 [2024-07-16 01:18:19.072720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.323 qpair failed and we were unable to recover it. 00:25:03.323 [2024-07-16 01:18:19.072949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.323 [2024-07-16 01:18:19.073034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.323 qpair failed and we were unable to recover it. 00:25:03.323 [2024-07-16 01:18:19.073303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.323 [2024-07-16 01:18:19.073358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.323 qpair failed and we were unable to recover it. 
00:25:03.323 [2024-07-16 01:18:19.073596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.323 [2024-07-16 01:18:19.073651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.323 qpair failed and we were unable to recover it. 00:25:03.323 [2024-07-16 01:18:19.073886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.323 [2024-07-16 01:18:19.073941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.323 qpair failed and we were unable to recover it. 00:25:03.323 [2024-07-16 01:18:19.074238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.323 [2024-07-16 01:18:19.074293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.323 qpair failed and we were unable to recover it. 00:25:03.323 [2024-07-16 01:18:19.074538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.323 [2024-07-16 01:18:19.074593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.323 qpair failed and we were unable to recover it. 00:25:03.323 [2024-07-16 01:18:19.074864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.323 [2024-07-16 01:18:19.074929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.323 qpair failed and we were unable to recover it. 00:25:03.323 [2024-07-16 01:18:19.075250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.323 [2024-07-16 01:18:19.075304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.323 qpair failed and we were unable to recover it. 00:25:03.323 [2024-07-16 01:18:19.075495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.323 [2024-07-16 01:18:19.075549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.323 qpair failed and we were unable to recover it. 00:25:03.323 [2024-07-16 01:18:19.075753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.323 [2024-07-16 01:18:19.075807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.323 qpair failed and we were unable to recover it. 00:25:03.323 [2024-07-16 01:18:19.076020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.323 [2024-07-16 01:18:19.076078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.323 qpair failed and we were unable to recover it. 00:25:03.323 [2024-07-16 01:18:19.076296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.323 [2024-07-16 01:18:19.076351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.323 qpair failed and we were unable to recover it. 
00:25:03.323 [2024-07-16 01:18:19.076579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.323 [2024-07-16 01:18:19.076634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.323 qpair failed and we were unable to recover it. 00:25:03.323 [2024-07-16 01:18:19.076868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.323 [2024-07-16 01:18:19.076923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.323 qpair failed and we were unable to recover it. 00:25:03.323 [2024-07-16 01:18:19.077201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.323 [2024-07-16 01:18:19.077265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.323 qpair failed and we were unable to recover it. 00:25:03.323 [2024-07-16 01:18:19.077496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.323 [2024-07-16 01:18:19.077550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.323 qpair failed and we were unable to recover it. 00:25:03.323 [2024-07-16 01:18:19.077789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.323 [2024-07-16 01:18:19.077844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.323 qpair failed and we were unable to recover it. 00:25:03.323 [2024-07-16 01:18:19.078122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.323 [2024-07-16 01:18:19.078178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.323 qpair failed and we were unable to recover it. 00:25:03.323 [2024-07-16 01:18:19.078421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.323 [2024-07-16 01:18:19.078475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.323 qpair failed and we were unable to recover it. 00:25:03.323 [2024-07-16 01:18:19.078779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.323 [2024-07-16 01:18:19.078843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.323 qpair failed and we were unable to recover it. 00:25:03.323 [2024-07-16 01:18:19.079121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.323 [2024-07-16 01:18:19.079176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.323 qpair failed and we were unable to recover it. 00:25:03.323 [2024-07-16 01:18:19.079409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.323 [2024-07-16 01:18:19.079464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.323 qpair failed and we were unable to recover it. 
00:25:03.329 [2024-07-16 01:18:19.147206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.329 [2024-07-16 01:18:19.147272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.329 qpair failed and we were unable to recover it. 00:25:03.329 [2024-07-16 01:18:19.147587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.329 [2024-07-16 01:18:19.147650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.329 qpair failed and we were unable to recover it. 00:25:03.329 [2024-07-16 01:18:19.147932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.329 [2024-07-16 01:18:19.148011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.329 qpair failed and we were unable to recover it. 00:25:03.329 [2024-07-16 01:18:19.148245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.329 [2024-07-16 01:18:19.148309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.329 qpair failed and we were unable to recover it. 00:25:03.329 [2024-07-16 01:18:19.148583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.329 [2024-07-16 01:18:19.148646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.329 qpair failed and we were unable to recover it. 00:25:03.329 [2024-07-16 01:18:19.148899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.329 [2024-07-16 01:18:19.148980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.329 qpair failed and we were unable to recover it. 00:25:03.329 [2024-07-16 01:18:19.149268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.329 [2024-07-16 01:18:19.149333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.329 qpair failed and we were unable to recover it. 00:25:03.329 [2024-07-16 01:18:19.149599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.329 [2024-07-16 01:18:19.149662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.329 qpair failed and we were unable to recover it. 00:25:03.329 [2024-07-16 01:18:19.149973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.329 [2024-07-16 01:18:19.150038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.329 qpair failed and we were unable to recover it. 00:25:03.329 [2024-07-16 01:18:19.150283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.329 [2024-07-16 01:18:19.150347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.329 qpair failed and we were unable to recover it. 
00:25:03.329 [2024-07-16 01:18:19.150619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.329 [2024-07-16 01:18:19.150682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.329 qpair failed and we were unable to recover it. 00:25:03.329 [2024-07-16 01:18:19.150949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.329 [2024-07-16 01:18:19.151031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.329 qpair failed and we were unable to recover it. 00:25:03.329 [2024-07-16 01:18:19.151310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.329 [2024-07-16 01:18:19.151377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.329 qpair failed and we were unable to recover it. 00:25:03.329 [2024-07-16 01:18:19.151699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.329 [2024-07-16 01:18:19.151763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.329 qpair failed and we were unable to recover it. 00:25:03.329 [2024-07-16 01:18:19.152046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.330 [2024-07-16 01:18:19.152114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.330 qpair failed and we were unable to recover it. 00:25:03.330 [2024-07-16 01:18:19.152438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.330 [2024-07-16 01:18:19.152503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.330 qpair failed and we were unable to recover it. 00:25:03.330 [2024-07-16 01:18:19.152769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.330 [2024-07-16 01:18:19.152833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.330 qpair failed and we were unable to recover it. 00:25:03.330 [2024-07-16 01:18:19.153152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.330 [2024-07-16 01:18:19.153219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.330 qpair failed and we were unable to recover it. 00:25:03.330 [2024-07-16 01:18:19.153539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.330 [2024-07-16 01:18:19.153612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.330 qpair failed and we were unable to recover it. 00:25:03.330 [2024-07-16 01:18:19.153844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.330 [2024-07-16 01:18:19.153915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.330 qpair failed and we were unable to recover it. 
00:25:03.330 [2024-07-16 01:18:19.154212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.330 [2024-07-16 01:18:19.154276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.330 qpair failed and we were unable to recover it. 00:25:03.330 [2024-07-16 01:18:19.154586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.330 [2024-07-16 01:18:19.154650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.330 qpair failed and we were unable to recover it. 00:25:03.330 [2024-07-16 01:18:19.154981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.330 [2024-07-16 01:18:19.155047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.330 qpair failed and we were unable to recover it. 00:25:03.330 [2024-07-16 01:18:19.155367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.330 [2024-07-16 01:18:19.155431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.330 qpair failed and we were unable to recover it. 00:25:03.330 [2024-07-16 01:18:19.155667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.330 [2024-07-16 01:18:19.155731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.330 qpair failed and we were unable to recover it. 00:25:03.330 [2024-07-16 01:18:19.155988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.330 [2024-07-16 01:18:19.156057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.330 qpair failed and we were unable to recover it. 00:25:03.330 [2024-07-16 01:18:19.156333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.330 [2024-07-16 01:18:19.156398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.330 qpair failed and we were unable to recover it. 00:25:03.330 [2024-07-16 01:18:19.156666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.330 [2024-07-16 01:18:19.156733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.330 qpair failed and we were unable to recover it. 00:25:03.330 [2024-07-16 01:18:19.157053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.330 [2024-07-16 01:18:19.157119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.330 qpair failed and we were unable to recover it. 00:25:03.330 [2024-07-16 01:18:19.157398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.330 [2024-07-16 01:18:19.157462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.330 qpair failed and we were unable to recover it. 
00:25:03.330 [2024-07-16 01:18:19.157738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.330 [2024-07-16 01:18:19.157802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.330 qpair failed and we were unable to recover it. 00:25:03.330 [2024-07-16 01:18:19.158080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.330 [2024-07-16 01:18:19.158145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.330 qpair failed and we were unable to recover it. 00:25:03.330 [2024-07-16 01:18:19.158445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.330 [2024-07-16 01:18:19.158510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.330 qpair failed and we were unable to recover it. 00:25:03.330 [2024-07-16 01:18:19.158782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.330 [2024-07-16 01:18:19.158848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.330 qpair failed and we were unable to recover it. 00:25:03.330 [2024-07-16 01:18:19.159181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.330 [2024-07-16 01:18:19.159247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.330 qpair failed and we were unable to recover it. 00:25:03.330 [2024-07-16 01:18:19.159528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.330 [2024-07-16 01:18:19.159591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.330 qpair failed and we were unable to recover it. 00:25:03.330 [2024-07-16 01:18:19.159865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.330 [2024-07-16 01:18:19.159931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.330 qpair failed and we were unable to recover it. 00:25:03.330 [2024-07-16 01:18:19.160240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.330 [2024-07-16 01:18:19.160308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.330 qpair failed and we were unable to recover it. 00:25:03.330 [2024-07-16 01:18:19.160604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.330 [2024-07-16 01:18:19.160669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.330 qpair failed and we were unable to recover it. 00:25:03.330 [2024-07-16 01:18:19.160905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.330 [2024-07-16 01:18:19.160994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.330 qpair failed and we were unable to recover it. 
00:25:03.330 [2024-07-16 01:18:19.161246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.330 [2024-07-16 01:18:19.161310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.330 qpair failed and we were unable to recover it. 00:25:03.330 [2024-07-16 01:18:19.161615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.330 [2024-07-16 01:18:19.161679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.330 qpair failed and we were unable to recover it. 00:25:03.330 [2024-07-16 01:18:19.161952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.330 [2024-07-16 01:18:19.162038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.330 qpair failed and we were unable to recover it. 00:25:03.330 [2024-07-16 01:18:19.162357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.330 [2024-07-16 01:18:19.162422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.330 qpair failed and we were unable to recover it. 00:25:03.330 [2024-07-16 01:18:19.162653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.330 [2024-07-16 01:18:19.162719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.330 qpair failed and we were unable to recover it. 00:25:03.330 [2024-07-16 01:18:19.163003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.330 [2024-07-16 01:18:19.163071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.330 qpair failed and we were unable to recover it. 00:25:03.330 [2024-07-16 01:18:19.163394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.330 [2024-07-16 01:18:19.163458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.330 qpair failed and we were unable to recover it. 00:25:03.330 [2024-07-16 01:18:19.163757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.330 [2024-07-16 01:18:19.163820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.330 qpair failed and we were unable to recover it. 00:25:03.330 [2024-07-16 01:18:19.164075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.330 [2024-07-16 01:18:19.164140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.330 qpair failed and we were unable to recover it. 00:25:03.330 [2024-07-16 01:18:19.164413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.330 [2024-07-16 01:18:19.164478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.330 qpair failed and we were unable to recover it. 
00:25:03.330 [2024-07-16 01:18:19.164710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.330 [2024-07-16 01:18:19.164773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.330 qpair failed and we were unable to recover it. 00:25:03.330 [2024-07-16 01:18:19.165018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.330 [2024-07-16 01:18:19.165083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.330 qpair failed and we were unable to recover it. 00:25:03.330 [2024-07-16 01:18:19.165402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.330 [2024-07-16 01:18:19.165466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.330 qpair failed and we were unable to recover it. 00:25:03.330 [2024-07-16 01:18:19.165734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.330 [2024-07-16 01:18:19.165799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.330 qpair failed and we were unable to recover it. 00:25:03.330 [2024-07-16 01:18:19.166046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.330 [2024-07-16 01:18:19.166111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.330 qpair failed and we were unable to recover it. 00:25:03.330 [2024-07-16 01:18:19.166390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.331 [2024-07-16 01:18:19.166455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.331 qpair failed and we were unable to recover it. 00:25:03.331 [2024-07-16 01:18:19.166738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.331 [2024-07-16 01:18:19.166802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.331 qpair failed and we were unable to recover it. 00:25:03.331 [2024-07-16 01:18:19.167077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.331 [2024-07-16 01:18:19.167143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.331 qpair failed and we were unable to recover it. 00:25:03.331 [2024-07-16 01:18:19.167456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.331 [2024-07-16 01:18:19.167531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.331 qpair failed and we were unable to recover it. 00:25:03.331 [2024-07-16 01:18:19.167816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.331 [2024-07-16 01:18:19.167880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.331 qpair failed and we were unable to recover it. 
00:25:03.331 [2024-07-16 01:18:19.168224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.331 [2024-07-16 01:18:19.168290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.331 qpair failed and we were unable to recover it. 00:25:03.331 [2024-07-16 01:18:19.168565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.331 [2024-07-16 01:18:19.168629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.331 qpair failed and we were unable to recover it. 00:25:03.331 [2024-07-16 01:18:19.168940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.331 [2024-07-16 01:18:19.169020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.331 qpair failed and we were unable to recover it. 00:25:03.331 [2024-07-16 01:18:19.169271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.331 [2024-07-16 01:18:19.169335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.331 qpair failed and we were unable to recover it. 00:25:03.331 [2024-07-16 01:18:19.169603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.331 [2024-07-16 01:18:19.169668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.331 qpair failed and we were unable to recover it. 00:25:03.331 [2024-07-16 01:18:19.169984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.331 [2024-07-16 01:18:19.170050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.331 qpair failed and we were unable to recover it. 00:25:03.331 [2024-07-16 01:18:19.170355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.331 [2024-07-16 01:18:19.170419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.331 qpair failed and we were unable to recover it. 00:25:03.331 [2024-07-16 01:18:19.170726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.331 [2024-07-16 01:18:19.170791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.331 qpair failed and we were unable to recover it. 00:25:03.331 [2024-07-16 01:18:19.171072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.331 [2024-07-16 01:18:19.171138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.331 qpair failed and we were unable to recover it. 00:25:03.331 [2024-07-16 01:18:19.171417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.331 [2024-07-16 01:18:19.171482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.331 qpair failed and we were unable to recover it. 
00:25:03.331 [2024-07-16 01:18:19.171791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.331 [2024-07-16 01:18:19.171856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.331 qpair failed and we were unable to recover it. 00:25:03.331 [2024-07-16 01:18:19.172149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.331 [2024-07-16 01:18:19.172213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.331 qpair failed and we were unable to recover it. 00:25:03.331 [2024-07-16 01:18:19.172510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.331 [2024-07-16 01:18:19.172575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.331 qpair failed and we were unable to recover it. 00:25:03.331 [2024-07-16 01:18:19.172839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.331 [2024-07-16 01:18:19.172905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.331 qpair failed and we were unable to recover it. 00:25:03.331 [2024-07-16 01:18:19.173189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.331 [2024-07-16 01:18:19.173253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.331 qpair failed and we were unable to recover it. 00:25:03.331 [2024-07-16 01:18:19.173531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.331 [2024-07-16 01:18:19.173599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.331 qpair failed and we were unable to recover it. 00:25:03.331 [2024-07-16 01:18:19.173906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.331 [2024-07-16 01:18:19.173985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.331 qpair failed and we were unable to recover it. 00:25:03.331 [2024-07-16 01:18:19.174216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.331 [2024-07-16 01:18:19.174283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.331 qpair failed and we were unable to recover it. 00:25:03.331 [2024-07-16 01:18:19.174598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.331 [2024-07-16 01:18:19.174663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.331 qpair failed and we were unable to recover it. 00:25:03.331 [2024-07-16 01:18:19.174995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.331 [2024-07-16 01:18:19.175062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.331 qpair failed and we were unable to recover it. 
00:25:03.331 [2024-07-16 01:18:19.175314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.331 [2024-07-16 01:18:19.175378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.331 qpair failed and we were unable to recover it. 00:25:03.331 [2024-07-16 01:18:19.175648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.331 [2024-07-16 01:18:19.175713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.331 qpair failed and we were unable to recover it. 00:25:03.331 [2024-07-16 01:18:19.175949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.331 [2024-07-16 01:18:19.176063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.331 qpair failed and we were unable to recover it. 00:25:03.331 [2024-07-16 01:18:19.176355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.331 [2024-07-16 01:18:19.176420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.331 qpair failed and we were unable to recover it. 00:25:03.331 [2024-07-16 01:18:19.176698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.331 [2024-07-16 01:18:19.176762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.331 qpair failed and we were unable to recover it. 00:25:03.331 [2024-07-16 01:18:19.177049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.331 [2024-07-16 01:18:19.177115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.331 qpair failed and we were unable to recover it. 00:25:03.331 [2024-07-16 01:18:19.177434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.331 [2024-07-16 01:18:19.177499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.331 qpair failed and we were unable to recover it. 00:25:03.331 [2024-07-16 01:18:19.177733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.331 [2024-07-16 01:18:19.177796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.331 qpair failed and we were unable to recover it. 00:25:03.331 [2024-07-16 01:18:19.178082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.331 [2024-07-16 01:18:19.178147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.331 qpair failed and we were unable to recover it. 00:25:03.331 [2024-07-16 01:18:19.178431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.331 [2024-07-16 01:18:19.178495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.331 qpair failed and we were unable to recover it. 
00:25:03.331 [2024-07-16 01:18:19.178777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.331 [2024-07-16 01:18:19.178841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.331 qpair failed and we were unable to recover it. 00:25:03.331 [2024-07-16 01:18:19.179153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.331 [2024-07-16 01:18:19.179219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.331 qpair failed and we were unable to recover it. 00:25:03.331 [2024-07-16 01:18:19.179496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.331 [2024-07-16 01:18:19.179560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.331 qpair failed and we were unable to recover it. 00:25:03.331 [2024-07-16 01:18:19.179821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.331 [2024-07-16 01:18:19.179884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.331 qpair failed and we were unable to recover it. 00:25:03.331 [2024-07-16 01:18:19.180179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.331 [2024-07-16 01:18:19.180245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.331 qpair failed and we were unable to recover it. 00:25:03.331 [2024-07-16 01:18:19.180497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.331 [2024-07-16 01:18:19.180561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.331 qpair failed and we were unable to recover it. 00:25:03.332 [2024-07-16 01:18:19.180877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.332 [2024-07-16 01:18:19.180941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.332 qpair failed and we were unable to recover it. 00:25:03.332 [2024-07-16 01:18:19.181278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.332 [2024-07-16 01:18:19.181341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.332 qpair failed and we were unable to recover it. 00:25:03.332 [2024-07-16 01:18:19.181564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.332 [2024-07-16 01:18:19.181639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.332 qpair failed and we were unable to recover it. 00:25:03.332 [2024-07-16 01:18:19.181952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.332 [2024-07-16 01:18:19.182029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.332 qpair failed and we were unable to recover it. 
00:25:03.332 [2024-07-16 01:18:19.182319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.332 [2024-07-16 01:18:19.182383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.332 qpair failed and we were unable to recover it. 00:25:03.332 [2024-07-16 01:18:19.182651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.332 [2024-07-16 01:18:19.182715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.332 qpair failed and we were unable to recover it. 00:25:03.332 [2024-07-16 01:18:19.182999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.332 [2024-07-16 01:18:19.183066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.332 qpair failed and we were unable to recover it. 00:25:03.332 [2024-07-16 01:18:19.183370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.332 [2024-07-16 01:18:19.183435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.332 qpair failed and we were unable to recover it. 00:25:03.332 [2024-07-16 01:18:19.183721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.332 [2024-07-16 01:18:19.183786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.332 qpair failed and we were unable to recover it. 00:25:03.332 [2024-07-16 01:18:19.184058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.332 [2024-07-16 01:18:19.184126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.332 qpair failed and we were unable to recover it. 00:25:03.332 [2024-07-16 01:18:19.184439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.332 [2024-07-16 01:18:19.184503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.332 qpair failed and we were unable to recover it. 00:25:03.332 [2024-07-16 01:18:19.184775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.332 [2024-07-16 01:18:19.184839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.332 qpair failed and we were unable to recover it. 00:25:03.332 [2024-07-16 01:18:19.185155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.332 [2024-07-16 01:18:19.185220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.332 qpair failed and we were unable to recover it. 00:25:03.332 [2024-07-16 01:18:19.185465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.332 [2024-07-16 01:18:19.185532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.332 qpair failed and we were unable to recover it. 
00:25:03.332 [2024-07-16 01:18:19.185821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.332 [2024-07-16 01:18:19.185888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.332 qpair failed and we were unable to recover it. 00:25:03.332 [2024-07-16 01:18:19.186181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.332 [2024-07-16 01:18:19.186248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.332 qpair failed and we were unable to recover it. 00:25:03.332 [2024-07-16 01:18:19.186574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.332 [2024-07-16 01:18:19.186638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.332 qpair failed and we were unable to recover it. 00:25:03.332 [2024-07-16 01:18:19.186894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.332 [2024-07-16 01:18:19.186976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.332 qpair failed and we were unable to recover it. 00:25:03.332 [2024-07-16 01:18:19.187257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.332 [2024-07-16 01:18:19.187323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.332 qpair failed and we were unable to recover it. 00:25:03.332 [2024-07-16 01:18:19.187617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.332 [2024-07-16 01:18:19.187682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.332 qpair failed and we were unable to recover it. 00:25:03.332 [2024-07-16 01:18:19.187987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.332 [2024-07-16 01:18:19.188054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.332 qpair failed and we were unable to recover it. 00:25:03.332 [2024-07-16 01:18:19.188335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.332 [2024-07-16 01:18:19.188399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.332 qpair failed and we were unable to recover it. 00:25:03.332 [2024-07-16 01:18:19.188653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.332 [2024-07-16 01:18:19.188717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.332 qpair failed and we were unable to recover it. 00:25:03.332 [2024-07-16 01:18:19.189025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.332 [2024-07-16 01:18:19.189091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.332 qpair failed and we were unable to recover it. 
00:25:03.332 [2024-07-16 01:18:19.189328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.332 [2024-07-16 01:18:19.189393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.332 qpair failed and we were unable to recover it. 00:25:03.332 [2024-07-16 01:18:19.189679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.332 [2024-07-16 01:18:19.189744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.332 qpair failed and we were unable to recover it. 00:25:03.332 [2024-07-16 01:18:19.190050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.332 [2024-07-16 01:18:19.190116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.332 qpair failed and we were unable to recover it. 00:25:03.332 [2024-07-16 01:18:19.190395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.332 [2024-07-16 01:18:19.190460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.332 qpair failed and we were unable to recover it. 00:25:03.332 [2024-07-16 01:18:19.190708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.332 [2024-07-16 01:18:19.190774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.332 qpair failed and we were unable to recover it. 00:25:03.332 [2024-07-16 01:18:19.191130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.332 [2024-07-16 01:18:19.191197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.332 qpair failed and we were unable to recover it. 00:25:03.332 [2024-07-16 01:18:19.191517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.332 [2024-07-16 01:18:19.191582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.332 qpair failed and we were unable to recover it. 00:25:03.332 [2024-07-16 01:18:19.191848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.332 [2024-07-16 01:18:19.191913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.332 qpair failed and we were unable to recover it. 00:25:03.332 [2024-07-16 01:18:19.192239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.332 [2024-07-16 01:18:19.192305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.332 qpair failed and we were unable to recover it. 00:25:03.332 [2024-07-16 01:18:19.192629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.332 [2024-07-16 01:18:19.192694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.332 qpair failed and we were unable to recover it. 
00:25:03.333 [2024-07-16 01:18:19.193019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.333 [2024-07-16 01:18:19.193087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:03.333 qpair failed and we were unable to recover it.
[... the same three-line sequence repeats for tqpair=0x7f7a68000b90 at timestamps 01:18:19.193374 through 01:18:19.219263, every attempt against addr=10.0.0.2, port=4420 ...]
00:25:03.334 [2024-07-16 01:18:19.219636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.334 [2024-07-16 01:18:19.219727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:03.334 qpair failed and we were unable to recover it.
[... the same sequence then repeats for tqpair=0x7f7a60000b90 at timestamps 01:18:19.220133 through 01:18:19.278485; no attempt recovers ...]
00:25:03.338 [2024-07-16 01:18:19.278830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.338 [2024-07-16 01:18:19.278906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:03.338 qpair failed and we were unable to recover it. 00:25:03.338 [2024-07-16 01:18:19.279319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.338 [2024-07-16 01:18:19.279393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:03.338 qpair failed and we were unable to recover it. 00:25:03.338 [2024-07-16 01:18:19.279717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.338 [2024-07-16 01:18:19.279791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:03.338 qpair failed and we were unable to recover it. 00:25:03.338 [2024-07-16 01:18:19.280203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.338 [2024-07-16 01:18:19.280278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:03.338 qpair failed and we were unable to recover it. 00:25:03.338 [2024-07-16 01:18:19.280664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.338 [2024-07-16 01:18:19.280738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:03.338 qpair failed and we were unable to recover it. 00:25:03.338 [2024-07-16 01:18:19.281109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.338 [2024-07-16 01:18:19.281185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:03.338 qpair failed and we were unable to recover it. 00:25:03.338 [2024-07-16 01:18:19.281539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.338 [2024-07-16 01:18:19.281615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:03.338 qpair failed and we were unable to recover it. 00:25:03.338 [2024-07-16 01:18:19.281998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.338 [2024-07-16 01:18:19.282073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:03.338 qpair failed and we were unable to recover it. 00:25:03.338 [2024-07-16 01:18:19.282407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.338 [2024-07-16 01:18:19.282482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:03.338 qpair failed and we were unable to recover it. 00:25:03.338 [2024-07-16 01:18:19.282807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.338 [2024-07-16 01:18:19.282880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:03.338 qpair failed and we were unable to recover it. 
00:25:03.338 [2024-07-16 01:18:19.283261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.338 [2024-07-16 01:18:19.283361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.338 qpair failed and we were unable to recover it.
00:25:03.633 [last messages repeated 159 more times for tqpair=0x7f7a58000b90 between 01:18:19.283691 and 01:18:19.339352; every attempt failed with connect() errno = 111 against 10.0.0.2:4420]
00:25:03.633 [2024-07-16 01:18:19.339628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.633 [2024-07-16 01:18:19.339697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.633 qpair failed and we were unable to recover it. 00:25:03.633 [2024-07-16 01:18:19.340022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.633 [2024-07-16 01:18:19.340061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.633 qpair failed and we were unable to recover it. 00:25:03.633 [2024-07-16 01:18:19.340261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.633 [2024-07-16 01:18:19.340319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.633 qpair failed and we were unable to recover it. 00:25:03.633 [2024-07-16 01:18:19.340631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.633 [2024-07-16 01:18:19.340696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.633 qpair failed and we were unable to recover it. 00:25:03.633 [2024-07-16 01:18:19.341019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.633 [2024-07-16 01:18:19.341057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.633 qpair failed and we were unable to recover it. 00:25:03.633 [2024-07-16 01:18:19.341273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.633 [2024-07-16 01:18:19.341337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.633 qpair failed and we were unable to recover it. 00:25:03.633 [2024-07-16 01:18:19.341571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.633 [2024-07-16 01:18:19.341637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.633 qpair failed and we were unable to recover it. 00:25:03.633 [2024-07-16 01:18:19.341917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.634 [2024-07-16 01:18:19.342011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.634 qpair failed and we were unable to recover it. 00:25:03.634 [2024-07-16 01:18:19.342160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.634 [2024-07-16 01:18:19.342198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.634 qpair failed and we were unable to recover it. 00:25:03.634 [2024-07-16 01:18:19.342364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.634 [2024-07-16 01:18:19.342402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.634 qpair failed and we were unable to recover it. 
00:25:03.634 [2024-07-16 01:18:19.342555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.634 [2024-07-16 01:18:19.342616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.634 qpair failed and we were unable to recover it. 00:25:03.634 [2024-07-16 01:18:19.342921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.634 [2024-07-16 01:18:19.343010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.634 qpair failed and we were unable to recover it. 00:25:03.634 [2024-07-16 01:18:19.343186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.634 [2024-07-16 01:18:19.343225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.634 qpair failed and we were unable to recover it. 00:25:03.634 [2024-07-16 01:18:19.343554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.634 [2024-07-16 01:18:19.343622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.634 qpair failed and we were unable to recover it. 00:25:03.634 [2024-07-16 01:18:19.343938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.634 [2024-07-16 01:18:19.344022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.634 qpair failed and we were unable to recover it. 00:25:03.634 [2024-07-16 01:18:19.344189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.634 [2024-07-16 01:18:19.344233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.634 qpair failed and we were unable to recover it. 00:25:03.634 [2024-07-16 01:18:19.344523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.634 [2024-07-16 01:18:19.344591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.634 qpair failed and we were unable to recover it. 00:25:03.634 [2024-07-16 01:18:19.344889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.634 [2024-07-16 01:18:19.344971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.634 qpair failed and we were unable to recover it. 00:25:03.634 [2024-07-16 01:18:19.345174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.634 [2024-07-16 01:18:19.345211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.634 qpair failed and we were unable to recover it. 00:25:03.634 [2024-07-16 01:18:19.345456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.634 [2024-07-16 01:18:19.345523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.634 qpair failed and we were unable to recover it. 
00:25:03.634 [2024-07-16 01:18:19.345768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.634 [2024-07-16 01:18:19.345837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.634 qpair failed and we were unable to recover it. 00:25:03.634 [2024-07-16 01:18:19.346112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.634 [2024-07-16 01:18:19.346150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.634 qpair failed and we were unable to recover it. 00:25:03.634 [2024-07-16 01:18:19.346317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.634 [2024-07-16 01:18:19.346354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.634 qpair failed and we were unable to recover it. 00:25:03.634 [2024-07-16 01:18:19.346528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.634 [2024-07-16 01:18:19.346602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.634 qpair failed and we were unable to recover it. 00:25:03.634 [2024-07-16 01:18:19.346851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.634 [2024-07-16 01:18:19.346916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.634 qpair failed and we were unable to recover it. 00:25:03.635 [2024-07-16 01:18:19.347183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.635 [2024-07-16 01:18:19.347220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.635 qpair failed and we were unable to recover it. 00:25:03.635 [2024-07-16 01:18:19.347468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.635 [2024-07-16 01:18:19.347532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.635 qpair failed and we were unable to recover it. 00:25:03.635 [2024-07-16 01:18:19.347834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.635 [2024-07-16 01:18:19.347898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.635 qpair failed and we were unable to recover it. 00:25:03.635 [2024-07-16 01:18:19.348141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.635 [2024-07-16 01:18:19.348180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.635 qpair failed and we were unable to recover it. 00:25:03.635 [2024-07-16 01:18:19.348447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.635 [2024-07-16 01:18:19.348484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.635 qpair failed and we were unable to recover it. 
00:25:03.635 [2024-07-16 01:18:19.348724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.635 [2024-07-16 01:18:19.348790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.635 qpair failed and we were unable to recover it. 00:25:03.635 [2024-07-16 01:18:19.349112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.635 [2024-07-16 01:18:19.349179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.635 qpair failed and we were unable to recover it. 00:25:03.635 [2024-07-16 01:18:19.349480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.635 [2024-07-16 01:18:19.349545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.635 qpair failed and we were unable to recover it. 00:25:03.635 [2024-07-16 01:18:19.349859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.635 [2024-07-16 01:18:19.349923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.635 qpair failed and we were unable to recover it. 00:25:03.635 [2024-07-16 01:18:19.350243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.635 [2024-07-16 01:18:19.350310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.635 qpair failed and we were unable to recover it. 00:25:03.635 [2024-07-16 01:18:19.350605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.635 [2024-07-16 01:18:19.350670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.635 qpair failed and we were unable to recover it. 00:25:03.635 [2024-07-16 01:18:19.350947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.635 [2024-07-16 01:18:19.351029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.635 qpair failed and we were unable to recover it. 00:25:03.635 [2024-07-16 01:18:19.351301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.635 [2024-07-16 01:18:19.351338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.635 qpair failed and we were unable to recover it. 00:25:03.635 [2024-07-16 01:18:19.351550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.635 [2024-07-16 01:18:19.351617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.635 qpair failed and we were unable to recover it. 00:25:03.635 [2024-07-16 01:18:19.351890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.635 [2024-07-16 01:18:19.351974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.635 qpair failed and we were unable to recover it. 
00:25:03.635 [2024-07-16 01:18:19.352260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.635 [2024-07-16 01:18:19.352326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.635 qpair failed and we were unable to recover it. 00:25:03.635 [2024-07-16 01:18:19.352620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.635 [2024-07-16 01:18:19.352685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.635 qpair failed and we were unable to recover it. 00:25:03.635 [2024-07-16 01:18:19.353026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.635 [2024-07-16 01:18:19.353094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.635 qpair failed and we were unable to recover it. 00:25:03.635 [2024-07-16 01:18:19.353398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.635 [2024-07-16 01:18:19.353436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.635 qpair failed and we were unable to recover it. 00:25:03.635 [2024-07-16 01:18:19.353623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.635 [2024-07-16 01:18:19.353660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.635 qpair failed and we were unable to recover it. 00:25:03.635 [2024-07-16 01:18:19.353944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.635 [2024-07-16 01:18:19.354031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.635 qpair failed and we were unable to recover it. 00:25:03.635 [2024-07-16 01:18:19.354312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.635 [2024-07-16 01:18:19.354382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.635 qpair failed and we were unable to recover it. 00:25:03.635 [2024-07-16 01:18:19.354654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.635 [2024-07-16 01:18:19.354719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.635 qpair failed and we were unable to recover it. 00:25:03.635 [2024-07-16 01:18:19.355040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.635 [2024-07-16 01:18:19.355106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.635 qpair failed and we were unable to recover it. 00:25:03.635 [2024-07-16 01:18:19.355417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.635 [2024-07-16 01:18:19.355484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.635 qpair failed and we were unable to recover it. 
00:25:03.635 [2024-07-16 01:18:19.355723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.635 [2024-07-16 01:18:19.355788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.635 qpair failed and we were unable to recover it. 00:25:03.635 [2024-07-16 01:18:19.356104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.635 [2024-07-16 01:18:19.356175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.635 qpair failed and we were unable to recover it. 00:25:03.635 [2024-07-16 01:18:19.356485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.635 [2024-07-16 01:18:19.356550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.635 qpair failed and we were unable to recover it. 00:25:03.635 [2024-07-16 01:18:19.356872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.636 [2024-07-16 01:18:19.356938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.636 qpair failed and we were unable to recover it. 00:25:03.636 [2024-07-16 01:18:19.357250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.636 [2024-07-16 01:18:19.357288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.636 qpair failed and we were unable to recover it. 00:25:03.636 [2024-07-16 01:18:19.357483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.636 [2024-07-16 01:18:19.357560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.636 qpair failed and we were unable to recover it. 00:25:03.636 [2024-07-16 01:18:19.357818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.636 [2024-07-16 01:18:19.357884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.636 qpair failed and we were unable to recover it. 00:25:03.636 [2024-07-16 01:18:19.358176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.636 [2024-07-16 01:18:19.358245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.636 qpair failed and we were unable to recover it. 00:25:03.636 [2024-07-16 01:18:19.358494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.636 [2024-07-16 01:18:19.358559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.636 qpair failed and we were unable to recover it. 00:25:03.636 [2024-07-16 01:18:19.358850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.636 [2024-07-16 01:18:19.358915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.636 qpair failed and we were unable to recover it. 
00:25:03.636 [2024-07-16 01:18:19.359254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.636 [2024-07-16 01:18:19.359320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.636 qpair failed and we were unable to recover it. 00:25:03.636 [2024-07-16 01:18:19.359606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.636 [2024-07-16 01:18:19.359670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.636 qpair failed and we were unable to recover it. 00:25:03.636 [2024-07-16 01:18:19.359991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.636 [2024-07-16 01:18:19.360059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.636 qpair failed and we were unable to recover it. 00:25:03.636 [2024-07-16 01:18:19.360371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.636 [2024-07-16 01:18:19.360437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.636 qpair failed and we were unable to recover it. 00:25:03.636 [2024-07-16 01:18:19.360727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.636 [2024-07-16 01:18:19.360791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.636 qpair failed and we were unable to recover it. 00:25:03.636 [2024-07-16 01:18:19.361081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.636 [2024-07-16 01:18:19.361147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.636 qpair failed and we were unable to recover it. 00:25:03.636 [2024-07-16 01:18:19.361459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.636 [2024-07-16 01:18:19.361524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.636 qpair failed and we were unable to recover it. 00:25:03.636 [2024-07-16 01:18:19.361798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.636 [2024-07-16 01:18:19.361866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.636 qpair failed and we were unable to recover it. 00:25:03.636 [2024-07-16 01:18:19.362176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.636 [2024-07-16 01:18:19.362242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.636 qpair failed and we were unable to recover it. 00:25:03.636 [2024-07-16 01:18:19.362495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.636 [2024-07-16 01:18:19.362560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.636 qpair failed and we were unable to recover it. 
00:25:03.636 [2024-07-16 01:18:19.362805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.636 [2024-07-16 01:18:19.362875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.636 qpair failed and we were unable to recover it. 00:25:03.636 [2024-07-16 01:18:19.363203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.636 [2024-07-16 01:18:19.363271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.636 qpair failed and we were unable to recover it. 00:25:03.636 [2024-07-16 01:18:19.363544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.636 [2024-07-16 01:18:19.363613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.636 qpair failed and we were unable to recover it. 00:25:03.636 [2024-07-16 01:18:19.363882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.636 [2024-07-16 01:18:19.363948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.636 qpair failed and we were unable to recover it. 00:25:03.636 [2024-07-16 01:18:19.364252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.636 [2024-07-16 01:18:19.364320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.636 qpair failed and we were unable to recover it. 00:25:03.636 [2024-07-16 01:18:19.364567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.636 [2024-07-16 01:18:19.364604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.636 qpair failed and we were unable to recover it. 00:25:03.636 [2024-07-16 01:18:19.364770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.636 [2024-07-16 01:18:19.364847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.636 qpair failed and we were unable to recover it. 00:25:03.636 [2024-07-16 01:18:19.365176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.636 [2024-07-16 01:18:19.365243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.636 qpair failed and we were unable to recover it. 00:25:03.636 [2024-07-16 01:18:19.365521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.636 [2024-07-16 01:18:19.365587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.636 qpair failed and we were unable to recover it. 00:25:03.636 [2024-07-16 01:18:19.365868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.636 [2024-07-16 01:18:19.365936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.636 qpair failed and we were unable to recover it. 
00:25:03.636 [2024-07-16 01:18:19.366231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.636 [2024-07-16 01:18:19.366269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.636 qpair failed and we were unable to recover it. 00:25:03.636 [2024-07-16 01:18:19.366435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.636 [2024-07-16 01:18:19.366474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.636 qpair failed and we were unable to recover it. 00:25:03.637 [2024-07-16 01:18:19.366648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.637 [2024-07-16 01:18:19.366688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.637 qpair failed and we were unable to recover it. 00:25:03.637 [2024-07-16 01:18:19.367027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.637 [2024-07-16 01:18:19.367097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.637 qpair failed and we were unable to recover it. 00:25:03.637 [2024-07-16 01:18:19.367369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.637 [2024-07-16 01:18:19.367437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.637 qpair failed and we were unable to recover it. 00:25:03.637 [2024-07-16 01:18:19.367733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.637 [2024-07-16 01:18:19.367799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.637 qpair failed and we were unable to recover it. 00:25:03.637 [2024-07-16 01:18:19.368126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.637 [2024-07-16 01:18:19.368193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.637 qpair failed and we were unable to recover it. 00:25:03.637 [2024-07-16 01:18:19.368472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.637 [2024-07-16 01:18:19.368537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.637 qpair failed and we were unable to recover it. 00:25:03.637 [2024-07-16 01:18:19.368832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.637 [2024-07-16 01:18:19.368897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.637 qpair failed and we were unable to recover it. 00:25:03.637 [2024-07-16 01:18:19.369187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.637 [2024-07-16 01:18:19.369253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.637 qpair failed and we were unable to recover it. 
00:25:03.637 [2024-07-16 01:18:19.369484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.637 [2024-07-16 01:18:19.369552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.637 qpair failed and we were unable to recover it. 00:25:03.637 [2024-07-16 01:18:19.369877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.637 [2024-07-16 01:18:19.369945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.637 qpair failed and we were unable to recover it. 00:25:03.637 [2024-07-16 01:18:19.370247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.637 [2024-07-16 01:18:19.370315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.637 qpair failed and we were unable to recover it. 00:25:03.637 [2024-07-16 01:18:19.370639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.637 [2024-07-16 01:18:19.370704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.637 qpair failed and we were unable to recover it. 00:25:03.637 [2024-07-16 01:18:19.371012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.637 [2024-07-16 01:18:19.371085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.637 qpair failed and we were unable to recover it. 00:25:03.637 [2024-07-16 01:18:19.371374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.637 [2024-07-16 01:18:19.371449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.637 qpair failed and we were unable to recover it. 00:25:03.637 [2024-07-16 01:18:19.371718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.637 [2024-07-16 01:18:19.371786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.637 qpair failed and we were unable to recover it. 00:25:03.637 [2024-07-16 01:18:19.372038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.637 [2024-07-16 01:18:19.372108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.637 qpair failed and we were unable to recover it. 00:25:03.637 [2024-07-16 01:18:19.372381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.637 [2024-07-16 01:18:19.372447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.637 qpair failed and we were unable to recover it. 00:25:03.637 [2024-07-16 01:18:19.372760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.637 [2024-07-16 01:18:19.372825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.637 qpair failed and we were unable to recover it. 
00:25:03.637 [2024-07-16 01:18:19.373129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.637 [2024-07-16 01:18:19.373197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.637 qpair failed and we were unable to recover it. 00:25:03.637 [2024-07-16 01:18:19.373441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.637 [2024-07-16 01:18:19.373507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.637 qpair failed and we were unable to recover it. 00:25:03.637 [2024-07-16 01:18:19.373818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.637 [2024-07-16 01:18:19.373883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.637 qpair failed and we were unable to recover it. 00:25:03.637 [2024-07-16 01:18:19.374181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.637 [2024-07-16 01:18:19.374248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.637 qpair failed and we were unable to recover it. 00:25:03.637 [2024-07-16 01:18:19.374487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.637 [2024-07-16 01:18:19.374555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.637 qpair failed and we were unable to recover it. 00:25:03.637 [2024-07-16 01:18:19.374865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.637 [2024-07-16 01:18:19.374929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.637 qpair failed and we were unable to recover it. 00:25:03.637 [2024-07-16 01:18:19.375203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.637 [2024-07-16 01:18:19.375270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.638 qpair failed and we were unable to recover it. 00:25:03.638 [2024-07-16 01:18:19.375549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.638 [2024-07-16 01:18:19.375617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.638 qpair failed and we were unable to recover it. 00:25:03.638 [2024-07-16 01:18:19.375887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.638 [2024-07-16 01:18:19.375952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.638 qpair failed and we were unable to recover it. 00:25:03.638 [2024-07-16 01:18:19.376246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.638 [2024-07-16 01:18:19.376311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.638 qpair failed and we were unable to recover it. 
00:25:03.638 [2024-07-16 01:18:19.376557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.638 [2024-07-16 01:18:19.376625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.638 qpair failed and we were unable to recover it. 00:25:03.638 [2024-07-16 01:18:19.376930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.638 [2024-07-16 01:18:19.376980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.638 qpair failed and we were unable to recover it. 00:25:03.638 [2024-07-16 01:18:19.377188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.638 [2024-07-16 01:18:19.377261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.638 qpair failed and we were unable to recover it. 00:25:03.638 [2024-07-16 01:18:19.377486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.638 [2024-07-16 01:18:19.377551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.638 qpair failed and we were unable to recover it. 00:25:03.638 [2024-07-16 01:18:19.377877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.638 [2024-07-16 01:18:19.377943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.638 qpair failed and we were unable to recover it. 00:25:03.638 [2024-07-16 01:18:19.378208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.638 [2024-07-16 01:18:19.378275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.638 qpair failed and we were unable to recover it. 00:25:03.638 [2024-07-16 01:18:19.378544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.638 [2024-07-16 01:18:19.378614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.638 qpair failed and we were unable to recover it. 00:25:03.638 [2024-07-16 01:18:19.378931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.638 [2024-07-16 01:18:19.379016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.638 qpair failed and we were unable to recover it. 00:25:03.638 [2024-07-16 01:18:19.379321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.638 [2024-07-16 01:18:19.379361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.638 qpair failed and we were unable to recover it. 00:25:03.638 [2024-07-16 01:18:19.379517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.638 [2024-07-16 01:18:19.379555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.638 qpair failed and we were unable to recover it. 
00:25:03.638 [2024-07-16 01:18:19.379865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.638 [2024-07-16 01:18:19.379930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.638 qpair failed and we were unable to recover it. 00:25:03.638 [2024-07-16 01:18:19.380243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.638 [2024-07-16 01:18:19.380310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.638 qpair failed and we were unable to recover it. 00:25:03.638 [2024-07-16 01:18:19.380598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.638 [2024-07-16 01:18:19.380664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.638 qpair failed and we were unable to recover it. 00:25:03.638 [2024-07-16 01:18:19.380983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.638 [2024-07-16 01:18:19.381057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.638 qpair failed and we were unable to recover it. 00:25:03.638 [2024-07-16 01:18:19.381369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.638 [2024-07-16 01:18:19.381436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.638 qpair failed and we were unable to recover it. 00:25:03.638 [2024-07-16 01:18:19.381758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.638 [2024-07-16 01:18:19.381824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.638 qpair failed and we were unable to recover it. 00:25:03.638 [2024-07-16 01:18:19.382099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.638 [2024-07-16 01:18:19.382166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.638 qpair failed and we were unable to recover it. 00:25:03.638 [2024-07-16 01:18:19.382441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.638 [2024-07-16 01:18:19.382507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.638 qpair failed and we were unable to recover it. 00:25:03.638 [2024-07-16 01:18:19.382835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.638 [2024-07-16 01:18:19.382903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.638 qpair failed and we were unable to recover it. 00:25:03.638 [2024-07-16 01:18:19.383231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.638 [2024-07-16 01:18:19.383297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:03.638 qpair failed and we were unable to recover it. 
00:25:03.638 [2024-07-16 01:18:19.383566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.638 [2024-07-16 01:18:19.383631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:03.638 qpair failed and we were unable to recover it.
00:25:03.640 [... the same three-line record (connect() failed, errno = 111; sock connection error; "qpair failed and we were unable to recover it.") repeats, with only timestamps changing, ~31 more times for tqpair=0x7f7a58000b90 through 2024-07-16 01:18:19.394576; from 01:18:19.394909 onward the identical failure continues for tqpair=0x14b73f0 ...]
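
On Linux, errno = 111 is ECONNREFUSED: each connect() to 10.0.0.2:4420 (4420 is the standard NVMe/TCP port) was actively refused, which normally means nothing was listening on that address yet. The minimal C sketch below is illustrative only, not part of the SPDK test code; it reproduces the same error line against any reachable host with nothing listening on the port (127.0.0.1 with an unused port works too, while an unreachable host would time out with a different errno):

    /* repro_econnrefused.c - attempt a TCP connect() to a port with no
     * listener and print the errno the kernel reports; on Linux this is
     * 111 (ECONNREFUSED), matching the posix.c:1023 message in the log.
     * The address and port mirror the log; they are not otherwise special. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                /* standard NVMe/TCP port */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

        close(fd);
        return 0;
    }
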
00:25:03.640 [2024-07-16 01:18:19.397803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.640 [2024-07-16 01:18:19.397866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:03.640 qpair failed and we were unable to recover it.
00:25:03.646 [... this record repeats, with only timestamps changing, ~169 more times for tqpair=0x14b73f0 through 2024-07-16 01:18:19.455420, every attempt failing with errno = 111 and ending in "qpair failed and we were unable to recover it." ...]
00:25:03.646 [2024-07-16 01:18:19.455687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.646 [2024-07-16 01:18:19.455750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.646 qpair failed and we were unable to recover it. 00:25:03.646 [2024-07-16 01:18:19.456042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.646 [2024-07-16 01:18:19.456107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.646 qpair failed and we were unable to recover it. 00:25:03.646 [2024-07-16 01:18:19.456384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.646 [2024-07-16 01:18:19.456448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.646 qpair failed and we were unable to recover it. 00:25:03.646 [2024-07-16 01:18:19.456756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.646 [2024-07-16 01:18:19.456820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.646 qpair failed and we were unable to recover it. 00:25:03.646 [2024-07-16 01:18:19.457103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.646 [2024-07-16 01:18:19.457168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.646 qpair failed and we were unable to recover it. 00:25:03.646 [2024-07-16 01:18:19.457445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.646 [2024-07-16 01:18:19.457482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.646 qpair failed and we were unable to recover it. 00:25:03.646 [2024-07-16 01:18:19.457664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.646 [2024-07-16 01:18:19.457730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.646 qpair failed and we were unable to recover it. 00:25:03.646 [2024-07-16 01:18:19.457999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.646 [2024-07-16 01:18:19.458037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.646 qpair failed and we were unable to recover it. 00:25:03.646 [2024-07-16 01:18:19.458229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.646 [2024-07-16 01:18:19.458294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.646 qpair failed and we were unable to recover it. 00:25:03.646 [2024-07-16 01:18:19.458612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.646 [2024-07-16 01:18:19.458675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.646 qpair failed and we were unable to recover it. 
00:25:03.646 [2024-07-16 01:18:19.458962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.646 [2024-07-16 01:18:19.459000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.646 qpair failed and we were unable to recover it. 00:25:03.646 [2024-07-16 01:18:19.459198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.646 [2024-07-16 01:18:19.459278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.646 qpair failed and we were unable to recover it. 00:25:03.646 [2024-07-16 01:18:19.459628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.646 [2024-07-16 01:18:19.459692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.646 qpair failed and we were unable to recover it. 00:25:03.646 [2024-07-16 01:18:19.460004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.646 [2024-07-16 01:18:19.460069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.646 qpair failed and we were unable to recover it. 00:25:03.646 [2024-07-16 01:18:19.460390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.646 [2024-07-16 01:18:19.460454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.646 qpair failed and we were unable to recover it. 00:25:03.646 [2024-07-16 01:18:19.460732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.646 [2024-07-16 01:18:19.460796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.646 qpair failed and we were unable to recover it. 00:25:03.646 [2024-07-16 01:18:19.461051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.646 [2024-07-16 01:18:19.461116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.646 qpair failed and we were unable to recover it. 00:25:03.646 [2024-07-16 01:18:19.461366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.646 [2024-07-16 01:18:19.461429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.646 qpair failed and we were unable to recover it. 00:25:03.646 [2024-07-16 01:18:19.461701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.646 [2024-07-16 01:18:19.461764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.646 qpair failed and we were unable to recover it. 00:25:03.646 [2024-07-16 01:18:19.462053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.646 [2024-07-16 01:18:19.462090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.646 qpair failed and we were unable to recover it. 
00:25:03.646 [2024-07-16 01:18:19.462212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.646 [2024-07-16 01:18:19.462249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.646 qpair failed and we were unable to recover it. 00:25:03.646 [2024-07-16 01:18:19.462394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.646 [2024-07-16 01:18:19.462430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.646 qpair failed and we were unable to recover it. 00:25:03.646 [2024-07-16 01:18:19.462703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.646 [2024-07-16 01:18:19.462739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.646 qpair failed and we were unable to recover it. 00:25:03.646 [2024-07-16 01:18:19.462910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.646 [2024-07-16 01:18:19.462947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.646 qpair failed and we were unable to recover it. 00:25:03.646 [2024-07-16 01:18:19.463292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.646 [2024-07-16 01:18:19.463355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.646 qpair failed and we were unable to recover it. 00:25:03.646 [2024-07-16 01:18:19.463635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.647 [2024-07-16 01:18:19.463698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.647 qpair failed and we were unable to recover it. 00:25:03.647 [2024-07-16 01:18:19.464003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.647 [2024-07-16 01:18:19.464069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.647 qpair failed and we were unable to recover it. 00:25:03.647 [2024-07-16 01:18:19.464344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.647 [2024-07-16 01:18:19.464410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.647 qpair failed and we were unable to recover it. 00:25:03.647 [2024-07-16 01:18:19.464721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.647 [2024-07-16 01:18:19.464785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.647 qpair failed and we were unable to recover it. 00:25:03.647 [2024-07-16 01:18:19.465097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.647 [2024-07-16 01:18:19.465162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.647 qpair failed and we were unable to recover it. 
00:25:03.647 [2024-07-16 01:18:19.465418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.647 [2024-07-16 01:18:19.465482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.647 qpair failed and we were unable to recover it. 00:25:03.647 [2024-07-16 01:18:19.465790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.647 [2024-07-16 01:18:19.465855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.647 qpair failed and we were unable to recover it. 00:25:03.647 [2024-07-16 01:18:19.466187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.647 [2024-07-16 01:18:19.466224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.647 qpair failed and we were unable to recover it. 00:25:03.647 [2024-07-16 01:18:19.466361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.647 [2024-07-16 01:18:19.466418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.647 qpair failed and we were unable to recover it. 00:25:03.647 [2024-07-16 01:18:19.466732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.647 [2024-07-16 01:18:19.466796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.647 qpair failed and we were unable to recover it. 00:25:03.647 [2024-07-16 01:18:19.467091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.647 [2024-07-16 01:18:19.467129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.647 qpair failed and we were unable to recover it. 00:25:03.647 [2024-07-16 01:18:19.467286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.647 [2024-07-16 01:18:19.467322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.647 qpair failed and we were unable to recover it. 00:25:03.647 [2024-07-16 01:18:19.467456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.647 [2024-07-16 01:18:19.467492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.647 qpair failed and we were unable to recover it. 00:25:03.647 [2024-07-16 01:18:19.467641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.647 [2024-07-16 01:18:19.467678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.647 qpair failed and we were unable to recover it. 00:25:03.647 [2024-07-16 01:18:19.468004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.647 [2024-07-16 01:18:19.468070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.647 qpair failed and we were unable to recover it. 
00:25:03.647 [2024-07-16 01:18:19.468375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.647 [2024-07-16 01:18:19.468438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.647 qpair failed and we were unable to recover it. 00:25:03.647 [2024-07-16 01:18:19.468692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.647 [2024-07-16 01:18:19.468753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.647 qpair failed and we were unable to recover it. 00:25:03.647 [2024-07-16 01:18:19.469039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.647 [2024-07-16 01:18:19.469105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.647 qpair failed and we were unable to recover it. 00:25:03.647 [2024-07-16 01:18:19.469397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.647 [2024-07-16 01:18:19.469461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.647 qpair failed and we were unable to recover it. 00:25:03.647 [2024-07-16 01:18:19.469772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.647 [2024-07-16 01:18:19.469835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.647 qpair failed and we were unable to recover it. 00:25:03.647 [2024-07-16 01:18:19.470175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.647 [2024-07-16 01:18:19.470247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.647 qpair failed and we were unable to recover it. 00:25:03.647 [2024-07-16 01:18:19.470529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.647 [2024-07-16 01:18:19.470604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.647 qpair failed and we were unable to recover it. 00:25:03.647 [2024-07-16 01:18:19.470914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.647 [2024-07-16 01:18:19.470951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.647 qpair failed and we were unable to recover it. 00:25:03.648 [2024-07-16 01:18:19.471128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.648 [2024-07-16 01:18:19.471165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.648 qpair failed and we were unable to recover it. 00:25:03.648 [2024-07-16 01:18:19.471299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.648 [2024-07-16 01:18:19.471336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.648 qpair failed and we were unable to recover it. 
00:25:03.648 [2024-07-16 01:18:19.471478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.648 [2024-07-16 01:18:19.471535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.648 qpair failed and we were unable to recover it. 00:25:03.648 [2024-07-16 01:18:19.471859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.648 [2024-07-16 01:18:19.471926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.648 qpair failed and we were unable to recover it. 00:25:03.648 [2024-07-16 01:18:19.472386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.648 [2024-07-16 01:18:19.472451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.648 qpair failed and we were unable to recover it. 00:25:03.648 [2024-07-16 01:18:19.472693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.648 [2024-07-16 01:18:19.472761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.648 qpair failed and we were unable to recover it. 00:25:03.648 [2024-07-16 01:18:19.473074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.648 [2024-07-16 01:18:19.473122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.648 qpair failed and we were unable to recover it. 00:25:03.648 [2024-07-16 01:18:19.473274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.648 [2024-07-16 01:18:19.473314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.648 qpair failed and we were unable to recover it. 00:25:03.648 [2024-07-16 01:18:19.473483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.648 [2024-07-16 01:18:19.473562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.648 qpair failed and we were unable to recover it. 00:25:03.648 [2024-07-16 01:18:19.473831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.648 [2024-07-16 01:18:19.473897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.648 qpair failed and we were unable to recover it. 00:25:03.648 [2024-07-16 01:18:19.474197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.648 [2024-07-16 01:18:19.474262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.648 qpair failed and we were unable to recover it. 00:25:03.648 [2024-07-16 01:18:19.474567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.648 [2024-07-16 01:18:19.474605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.648 qpair failed and we were unable to recover it. 
00:25:03.648 [2024-07-16 01:18:19.474797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.648 [2024-07-16 01:18:19.474854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.648 qpair failed and we were unable to recover it. 00:25:03.648 [2024-07-16 01:18:19.475158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.648 [2024-07-16 01:18:19.475229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.648 qpair failed and we were unable to recover it. 00:25:03.648 [2024-07-16 01:18:19.475493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.648 [2024-07-16 01:18:19.475567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.648 qpair failed and we were unable to recover it. 00:25:03.648 [2024-07-16 01:18:19.475835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.648 [2024-07-16 01:18:19.475904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.648 qpair failed and we were unable to recover it. 00:25:03.648 [2024-07-16 01:18:19.476242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.648 [2024-07-16 01:18:19.476309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.648 qpair failed and we were unable to recover it. 00:25:03.648 [2024-07-16 01:18:19.476586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.648 [2024-07-16 01:18:19.476650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.648 qpair failed and we were unable to recover it. 00:25:03.648 [2024-07-16 01:18:19.476980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.648 [2024-07-16 01:18:19.477056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.648 qpair failed and we were unable to recover it. 00:25:03.648 [2024-07-16 01:18:19.477343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.648 [2024-07-16 01:18:19.477408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.648 qpair failed and we were unable to recover it. 00:25:03.648 [2024-07-16 01:18:19.477651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.648 [2024-07-16 01:18:19.477688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.648 qpair failed and we were unable to recover it. 00:25:03.648 [2024-07-16 01:18:19.477885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.648 [2024-07-16 01:18:19.477952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.648 qpair failed and we were unable to recover it. 
00:25:03.648 [2024-07-16 01:18:19.478266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.648 [2024-07-16 01:18:19.478333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.648 qpair failed and we were unable to recover it. 00:25:03.648 [2024-07-16 01:18:19.478660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.648 [2024-07-16 01:18:19.478727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.648 qpair failed and we were unable to recover it. 00:25:03.648 [2024-07-16 01:18:19.479059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.648 [2024-07-16 01:18:19.479125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.648 qpair failed and we were unable to recover it. 00:25:03.648 [2024-07-16 01:18:19.479372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.648 [2024-07-16 01:18:19.479436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.648 qpair failed and we were unable to recover it. 00:25:03.648 [2024-07-16 01:18:19.479708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.648 [2024-07-16 01:18:19.479774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.648 qpair failed and we were unable to recover it. 00:25:03.648 [2024-07-16 01:18:19.480090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.648 [2024-07-16 01:18:19.480156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.648 qpair failed and we were unable to recover it. 00:25:03.649 [2024-07-16 01:18:19.480486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.649 [2024-07-16 01:18:19.480551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.649 qpair failed and we were unable to recover it. 00:25:03.649 [2024-07-16 01:18:19.480834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.649 [2024-07-16 01:18:19.480909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.649 qpair failed and we were unable to recover it. 00:25:03.649 [2024-07-16 01:18:19.481226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.649 [2024-07-16 01:18:19.481291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.649 qpair failed and we were unable to recover it. 00:25:03.649 [2024-07-16 01:18:19.481601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.649 [2024-07-16 01:18:19.481665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.649 qpair failed and we were unable to recover it. 
00:25:03.649 [2024-07-16 01:18:19.481909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.649 [2024-07-16 01:18:19.481988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.649 qpair failed and we were unable to recover it. 00:25:03.649 [2024-07-16 01:18:19.482288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.649 [2024-07-16 01:18:19.482354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.649 qpair failed and we were unable to recover it. 00:25:03.649 [2024-07-16 01:18:19.482662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.649 [2024-07-16 01:18:19.482729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.649 qpair failed and we were unable to recover it. 00:25:03.649 [2024-07-16 01:18:19.483055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.649 [2024-07-16 01:18:19.483122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.649 qpair failed and we were unable to recover it. 00:25:03.649 [2024-07-16 01:18:19.483402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.649 [2024-07-16 01:18:19.483465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.649 qpair failed and we were unable to recover it. 00:25:03.649 [2024-07-16 01:18:19.483754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.649 [2024-07-16 01:18:19.483836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.649 qpair failed and we were unable to recover it. 00:25:03.649 [2024-07-16 01:18:19.484155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.649 [2024-07-16 01:18:19.484221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.649 qpair failed and we were unable to recover it. 00:25:03.649 [2024-07-16 01:18:19.484532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.649 [2024-07-16 01:18:19.484596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.649 qpair failed and we were unable to recover it. 00:25:03.649 [2024-07-16 01:18:19.484866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.649 [2024-07-16 01:18:19.484930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.649 qpair failed and we were unable to recover it. 00:25:03.649 [2024-07-16 01:18:19.485261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.649 [2024-07-16 01:18:19.485340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.649 qpair failed and we were unable to recover it. 
00:25:03.649 [2024-07-16 01:18:19.485620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.649 [2024-07-16 01:18:19.485685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.649 qpair failed and we were unable to recover it. 00:25:03.649 [2024-07-16 01:18:19.485953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.649 [2024-07-16 01:18:19.486041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.649 qpair failed and we were unable to recover it. 00:25:03.649 [2024-07-16 01:18:19.486347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.649 [2024-07-16 01:18:19.486412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.649 qpair failed and we were unable to recover it. 00:25:03.649 [2024-07-16 01:18:19.486692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.649 [2024-07-16 01:18:19.486769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.649 qpair failed and we were unable to recover it. 00:25:03.649 [2024-07-16 01:18:19.487066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.649 [2024-07-16 01:18:19.487133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.649 qpair failed and we were unable to recover it. 00:25:03.649 [2024-07-16 01:18:19.487419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.649 [2024-07-16 01:18:19.487483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.649 qpair failed and we were unable to recover it. 00:25:03.649 [2024-07-16 01:18:19.487801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.649 [2024-07-16 01:18:19.487864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.649 qpair failed and we were unable to recover it. 00:25:03.649 [2024-07-16 01:18:19.488198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.649 [2024-07-16 01:18:19.488266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.649 qpair failed and we were unable to recover it. 00:25:03.649 [2024-07-16 01:18:19.488575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.649 [2024-07-16 01:18:19.488613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.649 qpair failed and we were unable to recover it. 00:25:03.649 [2024-07-16 01:18:19.488811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.649 [2024-07-16 01:18:19.488870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.649 qpair failed and we were unable to recover it. 
00:25:03.649 [2024-07-16 01:18:19.489163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.649 [2024-07-16 01:18:19.489228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.649 qpair failed and we were unable to recover it. 00:25:03.649 [2024-07-16 01:18:19.489536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.649 [2024-07-16 01:18:19.489600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.649 qpair failed and we were unable to recover it. 00:25:03.649 [2024-07-16 01:18:19.489939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.649 [2024-07-16 01:18:19.490040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.649 qpair failed and we were unable to recover it. 00:25:03.649 [2024-07-16 01:18:19.490320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.649 [2024-07-16 01:18:19.490357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.649 qpair failed and we were unable to recover it. 00:25:03.650 [2024-07-16 01:18:19.490509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.650 [2024-07-16 01:18:19.490585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.650 qpair failed and we were unable to recover it. 00:25:03.650 [2024-07-16 01:18:19.490862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.650 [2024-07-16 01:18:19.490922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.650 qpair failed and we were unable to recover it. 00:25:03.650 [2024-07-16 01:18:19.491229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.650 [2024-07-16 01:18:19.491266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.650 qpair failed and we were unable to recover it. 00:25:03.650 [2024-07-16 01:18:19.491444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.650 [2024-07-16 01:18:19.491514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.650 qpair failed and we were unable to recover it. 00:25:03.650 [2024-07-16 01:18:19.491845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.650 [2024-07-16 01:18:19.491911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.650 qpair failed and we were unable to recover it. 00:25:03.650 [2024-07-16 01:18:19.492261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.650 [2024-07-16 01:18:19.492326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.650 qpair failed and we were unable to recover it. 
00:25:03.650 [2024-07-16 01:18:19.492555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.650 [2024-07-16 01:18:19.492618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.650 qpair failed and we were unable to recover it. 00:25:03.650 [2024-07-16 01:18:19.492931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.650 [2024-07-16 01:18:19.493029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.650 qpair failed and we were unable to recover it. 00:25:03.650 [2024-07-16 01:18:19.493364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.650 [2024-07-16 01:18:19.493429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.650 qpair failed and we were unable to recover it. 00:25:03.650 [2024-07-16 01:18:19.493709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.650 [2024-07-16 01:18:19.493773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.650 qpair failed and we were unable to recover it. 00:25:03.650 [2024-07-16 01:18:19.494058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.650 [2024-07-16 01:18:19.494124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.650 qpair failed and we were unable to recover it. 00:25:03.650 [2024-07-16 01:18:19.494375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.650 [2024-07-16 01:18:19.494440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.650 qpair failed and we were unable to recover it. 00:25:03.650 [2024-07-16 01:18:19.494764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.650 [2024-07-16 01:18:19.494832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.650 qpair failed and we were unable to recover it. 00:25:03.650 [2024-07-16 01:18:19.495082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.650 [2024-07-16 01:18:19.495148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.650 qpair failed and we were unable to recover it. 00:25:03.650 [2024-07-16 01:18:19.495454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.650 [2024-07-16 01:18:19.495518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.650 qpair failed and we were unable to recover it. 00:25:03.650 [2024-07-16 01:18:19.495801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.650 [2024-07-16 01:18:19.495865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.650 qpair failed and we were unable to recover it. 
00:25:03.650 [2024-07-16 01:18:19.496196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.650 [2024-07-16 01:18:19.496263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.650 qpair failed and we were unable to recover it. 00:25:03.650 [2024-07-16 01:18:19.496557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.650 [2024-07-16 01:18:19.496621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.650 qpair failed and we were unable to recover it. 00:25:03.650 [2024-07-16 01:18:19.496934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.650 [2024-07-16 01:18:19.497013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.650 qpair failed and we were unable to recover it. 00:25:03.650 [2024-07-16 01:18:19.497339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.650 [2024-07-16 01:18:19.497402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.650 qpair failed and we were unable to recover it. 00:25:03.650 [2024-07-16 01:18:19.497745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.650 [2024-07-16 01:18:19.497812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.650 qpair failed and we were unable to recover it. 00:25:03.650 [2024-07-16 01:18:19.498102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.650 [2024-07-16 01:18:19.498169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.650 qpair failed and we were unable to recover it. 00:25:03.650 [2024-07-16 01:18:19.498401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.650 [2024-07-16 01:18:19.498470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.650 qpair failed and we were unable to recover it. 00:25:03.650 [2024-07-16 01:18:19.498785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.650 [2024-07-16 01:18:19.498849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.650 qpair failed and we were unable to recover it. 00:25:03.650 [2024-07-16 01:18:19.499190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.650 [2024-07-16 01:18:19.499258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.650 qpair failed and we were unable to recover it. 00:25:03.650 [2024-07-16 01:18:19.499549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.650 [2024-07-16 01:18:19.499612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.650 qpair failed and we were unable to recover it. 
00:25:03.650 [2024-07-16 01:18:19.499916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.650 [2024-07-16 01:18:19.499995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:03.650 qpair failed and we were unable to recover it.
00:25:03.650 [2024-07-16 01:18:19.500319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.650 [2024-07-16 01:18:19.500384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:03.650 qpair failed and we were unable to recover it.
[... the same three-line sequence repeats for every retry from 01:18:19.500700 through 01:18:19.565884, each attempt against tqpair=0x14b73f0 at 10.0.0.2 port 4420 failing with connect() errno = 111 and the qpair left unrecoverable ...]
00:25:03.658 [2024-07-16 01:18:19.566185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.658 [2024-07-16 01:18:19.566252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:03.658 qpair failed and we were unable to recover it.
00:25:03.658 [2024-07-16 01:18:19.566521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.658 [2024-07-16 01:18:19.566584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.658 qpair failed and we were unable to recover it. 00:25:03.658 [2024-07-16 01:18:19.566912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.658 [2024-07-16 01:18:19.567006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.658 qpair failed and we were unable to recover it. 00:25:03.658 [2024-07-16 01:18:19.567335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.658 [2024-07-16 01:18:19.567402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.658 qpair failed and we were unable to recover it. 00:25:03.658 [2024-07-16 01:18:19.567688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.658 [2024-07-16 01:18:19.567752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.658 qpair failed and we were unable to recover it. 00:25:03.658 [2024-07-16 01:18:19.568040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.658 [2024-07-16 01:18:19.568105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.658 qpair failed and we were unable to recover it. 00:25:03.658 [2024-07-16 01:18:19.568383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.658 [2024-07-16 01:18:19.568449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.658 qpair failed and we were unable to recover it. 00:25:03.658 [2024-07-16 01:18:19.568765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.658 [2024-07-16 01:18:19.568832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.658 qpair failed and we were unable to recover it. 00:25:03.658 [2024-07-16 01:18:19.569100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.658 [2024-07-16 01:18:19.569166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.658 qpair failed and we were unable to recover it. 00:25:03.658 [2024-07-16 01:18:19.569498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.658 [2024-07-16 01:18:19.569561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.658 qpair failed and we were unable to recover it. 00:25:03.658 [2024-07-16 01:18:19.569844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.658 [2024-07-16 01:18:19.569908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.658 qpair failed and we were unable to recover it. 
00:25:03.658 [2024-07-16 01:18:19.570192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.658 [2024-07-16 01:18:19.570271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.658 qpair failed and we were unable to recover it. 00:25:03.658 [2024-07-16 01:18:19.570608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.658 [2024-07-16 01:18:19.570673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.658 qpair failed and we were unable to recover it. 00:25:03.658 [2024-07-16 01:18:19.570993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.658 [2024-07-16 01:18:19.571059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.658 qpair failed and we were unable to recover it. 00:25:03.658 [2024-07-16 01:18:19.571290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.658 [2024-07-16 01:18:19.571355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.658 qpair failed and we were unable to recover it. 00:25:03.658 [2024-07-16 01:18:19.571609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.658 [2024-07-16 01:18:19.571644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.658 qpair failed and we were unable to recover it. 00:25:03.658 [2024-07-16 01:18:19.571804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.658 [2024-07-16 01:18:19.571839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.658 qpair failed and we were unable to recover it. 00:25:03.658 [2024-07-16 01:18:19.572115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.658 [2024-07-16 01:18:19.572180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.658 qpair failed and we were unable to recover it. 00:25:03.658 [2024-07-16 01:18:19.572440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.658 [2024-07-16 01:18:19.572505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.658 qpair failed and we were unable to recover it. 00:25:03.658 [2024-07-16 01:18:19.572780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.658 [2024-07-16 01:18:19.572844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.658 qpair failed and we were unable to recover it. 00:25:03.658 [2024-07-16 01:18:19.573129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.658 [2024-07-16 01:18:19.573209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.658 qpair failed and we were unable to recover it. 
00:25:03.658 [2024-07-16 01:18:19.573504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.658 [2024-07-16 01:18:19.573570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.658 qpair failed and we were unable to recover it. 00:25:03.658 [2024-07-16 01:18:19.573895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.658 [2024-07-16 01:18:19.573981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.658 qpair failed and we were unable to recover it. 00:25:03.659 [2024-07-16 01:18:19.574292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.659 [2024-07-16 01:18:19.574325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.659 qpair failed and we were unable to recover it. 00:25:03.659 [2024-07-16 01:18:19.574480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.659 [2024-07-16 01:18:19.574513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.659 qpair failed and we were unable to recover it. 00:25:03.659 [2024-07-16 01:18:19.574725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.659 [2024-07-16 01:18:19.574808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.659 qpair failed and we were unable to recover it. 00:25:03.659 [2024-07-16 01:18:19.575112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.659 [2024-07-16 01:18:19.575147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.659 qpair failed and we were unable to recover it. 00:25:03.659 [2024-07-16 01:18:19.575375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.659 [2024-07-16 01:18:19.575440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.659 qpair failed and we were unable to recover it. 00:25:03.659 [2024-07-16 01:18:19.575718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.659 [2024-07-16 01:18:19.575785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.659 qpair failed and we were unable to recover it. 00:25:03.659 [2024-07-16 01:18:19.576066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.659 [2024-07-16 01:18:19.576134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.659 qpair failed and we were unable to recover it. 00:25:03.659 [2024-07-16 01:18:19.576373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.659 [2024-07-16 01:18:19.576438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.659 qpair failed and we were unable to recover it. 
00:25:03.659 [2024-07-16 01:18:19.576739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.659 [2024-07-16 01:18:19.576803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.659 qpair failed and we were unable to recover it. 00:25:03.659 [2024-07-16 01:18:19.577085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.659 [2024-07-16 01:18:19.577152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.659 qpair failed and we were unable to recover it. 00:25:03.659 [2024-07-16 01:18:19.577474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.659 [2024-07-16 01:18:19.577541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.659 qpair failed and we were unable to recover it. 00:25:03.659 [2024-07-16 01:18:19.577856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.659 [2024-07-16 01:18:19.577921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.659 qpair failed and we were unable to recover it. 00:25:03.659 [2024-07-16 01:18:19.578232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.659 [2024-07-16 01:18:19.578298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.659 qpair failed and we were unable to recover it. 00:25:03.659 [2024-07-16 01:18:19.578546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.659 [2024-07-16 01:18:19.578618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.659 qpair failed and we were unable to recover it. 00:25:03.659 [2024-07-16 01:18:19.578896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.659 [2024-07-16 01:18:19.578931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.659 qpair failed and we were unable to recover it. 00:25:03.659 [2024-07-16 01:18:19.579120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.659 [2024-07-16 01:18:19.579186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.659 qpair failed and we were unable to recover it. 00:25:03.659 [2024-07-16 01:18:19.579499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.659 [2024-07-16 01:18:19.579573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.659 qpair failed and we were unable to recover it. 00:25:03.659 [2024-07-16 01:18:19.579849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.659 [2024-07-16 01:18:19.579913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.659 qpair failed and we were unable to recover it. 
00:25:03.659 [2024-07-16 01:18:19.580227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.659 [2024-07-16 01:18:19.580309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.659 qpair failed and we were unable to recover it. 00:25:03.659 [2024-07-16 01:18:19.580631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.659 [2024-07-16 01:18:19.580695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.659 qpair failed and we were unable to recover it. 00:25:03.659 [2024-07-16 01:18:19.581012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.659 [2024-07-16 01:18:19.581078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.659 qpair failed and we were unable to recover it. 00:25:03.659 [2024-07-16 01:18:19.581362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.659 [2024-07-16 01:18:19.581428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.659 qpair failed and we were unable to recover it. 00:25:03.659 [2024-07-16 01:18:19.581734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.659 [2024-07-16 01:18:19.581801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.660 qpair failed and we were unable to recover it. 00:25:03.660 [2024-07-16 01:18:19.582112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.660 [2024-07-16 01:18:19.582179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.660 qpair failed and we were unable to recover it. 00:25:03.660 [2024-07-16 01:18:19.582457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.660 [2024-07-16 01:18:19.582521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.660 qpair failed and we were unable to recover it. 00:25:03.660 [2024-07-16 01:18:19.582791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.660 [2024-07-16 01:18:19.582824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.660 qpair failed and we were unable to recover it. 00:25:03.660 [2024-07-16 01:18:19.582975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.660 [2024-07-16 01:18:19.583039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.660 qpair failed and we were unable to recover it. 00:25:03.660 [2024-07-16 01:18:19.583315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.660 [2024-07-16 01:18:19.583396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.660 qpair failed and we were unable to recover it. 
00:25:03.660 [2024-07-16 01:18:19.583678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.660 [2024-07-16 01:18:19.583712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.660 qpair failed and we were unable to recover it. 00:25:03.660 [2024-07-16 01:18:19.583865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.660 [2024-07-16 01:18:19.583933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.660 qpair failed and we were unable to recover it. 00:25:03.660 [2024-07-16 01:18:19.584231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.660 [2024-07-16 01:18:19.584295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.660 qpair failed and we were unable to recover it. 00:25:03.660 [2024-07-16 01:18:19.584605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.660 [2024-07-16 01:18:19.584669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.660 qpair failed and we were unable to recover it. 00:25:03.660 [2024-07-16 01:18:19.584992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.660 [2024-07-16 01:18:19.585060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.660 qpair failed and we were unable to recover it. 00:25:03.660 [2024-07-16 01:18:19.585343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.660 [2024-07-16 01:18:19.585407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.660 qpair failed and we were unable to recover it. 00:25:03.660 [2024-07-16 01:18:19.585712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.660 [2024-07-16 01:18:19.585776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.660 qpair failed and we were unable to recover it. 00:25:03.660 [2024-07-16 01:18:19.586024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.660 [2024-07-16 01:18:19.586092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.660 qpair failed and we were unable to recover it. 00:25:03.660 [2024-07-16 01:18:19.586384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.660 [2024-07-16 01:18:19.586461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.660 qpair failed and we were unable to recover it. 00:25:03.660 [2024-07-16 01:18:19.586742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.660 [2024-07-16 01:18:19.586809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.660 qpair failed and we were unable to recover it. 
00:25:03.660 [2024-07-16 01:18:19.587083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.660 [2024-07-16 01:18:19.587148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.660 qpair failed and we were unable to recover it. 00:25:03.660 [2024-07-16 01:18:19.587420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.660 [2024-07-16 01:18:19.587485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.660 qpair failed and we were unable to recover it. 00:25:03.660 [2024-07-16 01:18:19.587736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.660 [2024-07-16 01:18:19.587801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.660 qpair failed and we were unable to recover it. 00:25:03.660 [2024-07-16 01:18:19.588108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.660 [2024-07-16 01:18:19.588143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.660 qpair failed and we were unable to recover it. 00:25:03.660 [2024-07-16 01:18:19.588375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.660 [2024-07-16 01:18:19.588441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.660 qpair failed and we were unable to recover it. 00:25:03.660 [2024-07-16 01:18:19.588723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.660 [2024-07-16 01:18:19.588797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.660 qpair failed and we were unable to recover it. 00:25:03.660 [2024-07-16 01:18:19.589113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.660 [2024-07-16 01:18:19.589178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.660 qpair failed and we were unable to recover it. 00:25:03.660 [2024-07-16 01:18:19.589465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.660 [2024-07-16 01:18:19.589545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.660 qpair failed and we were unable to recover it. 00:25:03.660 [2024-07-16 01:18:19.589862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.660 [2024-07-16 01:18:19.589929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.660 qpair failed and we were unable to recover it. 00:25:03.660 [2024-07-16 01:18:19.590268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.660 [2024-07-16 01:18:19.590334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.660 qpair failed and we were unable to recover it. 
00:25:03.660 [2024-07-16 01:18:19.590619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.660 [2024-07-16 01:18:19.590682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.660 qpair failed and we were unable to recover it. 00:25:03.660 [2024-07-16 01:18:19.590943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.660 [2024-07-16 01:18:19.591029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.660 qpair failed and we were unable to recover it. 00:25:03.660 [2024-07-16 01:18:19.591325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.660 [2024-07-16 01:18:19.591391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.660 qpair failed and we were unable to recover it. 00:25:03.660 [2024-07-16 01:18:19.591712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.661 [2024-07-16 01:18:19.591775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.661 qpair failed and we were unable to recover it. 00:25:03.661 [2024-07-16 01:18:19.592075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.661 [2024-07-16 01:18:19.592164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.661 qpair failed and we were unable to recover it. 00:25:03.661 [2024-07-16 01:18:19.592458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.661 [2024-07-16 01:18:19.592522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.661 qpair failed and we were unable to recover it. 00:25:03.661 [2024-07-16 01:18:19.592839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.661 [2024-07-16 01:18:19.592905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.661 qpair failed and we were unable to recover it. 00:25:03.661 [2024-07-16 01:18:19.593288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.661 [2024-07-16 01:18:19.593385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.661 qpair failed and we were unable to recover it. 00:25:03.661 [2024-07-16 01:18:19.593744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.661 [2024-07-16 01:18:19.593831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.661 qpair failed and we were unable to recover it. 00:25:03.661 [2024-07-16 01:18:19.594178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.661 [2024-07-16 01:18:19.594219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.661 qpair failed and we were unable to recover it. 
00:25:03.661 [2024-07-16 01:18:19.594436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.661 [2024-07-16 01:18:19.594511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.661 qpair failed and we were unable to recover it. 00:25:03.661 [2024-07-16 01:18:19.594891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.661 [2024-07-16 01:18:19.594986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.661 qpair failed and we were unable to recover it. 00:25:03.661 [2024-07-16 01:18:19.595378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.661 [2024-07-16 01:18:19.595452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.661 qpair failed and we were unable to recover it. 00:25:03.661 [2024-07-16 01:18:19.595797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.661 [2024-07-16 01:18:19.595871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.661 qpair failed and we were unable to recover it. 00:25:03.661 [2024-07-16 01:18:19.596239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.661 [2024-07-16 01:18:19.596314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.661 qpair failed and we were unable to recover it. 00:25:03.661 [2024-07-16 01:18:19.596628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.661 [2024-07-16 01:18:19.596667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.661 qpair failed and we were unable to recover it. 00:25:03.661 [2024-07-16 01:18:19.596826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.661 [2024-07-16 01:18:19.596864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.661 qpair failed and we were unable to recover it. 00:25:03.661 [2024-07-16 01:18:19.597083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.661 [2024-07-16 01:18:19.597158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.661 qpair failed and we were unable to recover it. 00:25:03.661 [2024-07-16 01:18:19.597524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.661 [2024-07-16 01:18:19.597599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.661 qpair failed and we were unable to recover it. 00:25:03.661 [2024-07-16 01:18:19.597972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.661 [2024-07-16 01:18:19.598047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.661 qpair failed and we were unable to recover it. 
00:25:03.661 [2024-07-16 01:18:19.598413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.661 [2024-07-16 01:18:19.598487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.661 qpair failed and we were unable to recover it. 00:25:03.661 [2024-07-16 01:18:19.598843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.661 [2024-07-16 01:18:19.598916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.661 qpair failed and we were unable to recover it. 00:25:03.661 [2024-07-16 01:18:19.599264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.661 [2024-07-16 01:18:19.599351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.661 qpair failed and we were unable to recover it. 00:25:03.661 [2024-07-16 01:18:19.599698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.661 [2024-07-16 01:18:19.599772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.661 qpair failed and we were unable to recover it. 00:25:03.661 [2024-07-16 01:18:19.600146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.661 [2024-07-16 01:18:19.600219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.661 qpair failed and we were unable to recover it. 00:25:03.661 [2024-07-16 01:18:19.600561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.661 [2024-07-16 01:18:19.600635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.661 qpair failed and we were unable to recover it. 00:25:03.661 [2024-07-16 01:18:19.601001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.661 [2024-07-16 01:18:19.601078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.661 qpair failed and we were unable to recover it. 00:25:03.661 [2024-07-16 01:18:19.601451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.661 [2024-07-16 01:18:19.601524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.661 qpair failed and we were unable to recover it. 00:25:03.661 [2024-07-16 01:18:19.601853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.661 [2024-07-16 01:18:19.601926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.661 qpair failed and we were unable to recover it. 00:25:03.661 [2024-07-16 01:18:19.602318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.661 [2024-07-16 01:18:19.602392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.661 qpair failed and we were unable to recover it. 
00:25:03.661 [2024-07-16 01:18:19.602712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.661 [2024-07-16 01:18:19.602793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.661 qpair failed and we were unable to recover it. 00:25:03.661 [2024-07-16 01:18:19.603149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.661 [2024-07-16 01:18:19.603223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.661 qpair failed and we were unable to recover it. 00:25:03.661 [2024-07-16 01:18:19.603581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.661 [2024-07-16 01:18:19.603654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.661 qpair failed and we were unable to recover it. 00:25:03.662 [2024-07-16 01:18:19.603949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.662 [2024-07-16 01:18:19.604044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.662 qpair failed and we were unable to recover it. 00:25:03.662 [2024-07-16 01:18:19.604430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.662 [2024-07-16 01:18:19.604503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.662 qpair failed and we were unable to recover it. 00:25:03.662 [2024-07-16 01:18:19.604840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.662 [2024-07-16 01:18:19.604920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.662 qpair failed and we were unable to recover it. 00:25:03.662 [2024-07-16 01:18:19.605331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.662 [2024-07-16 01:18:19.605406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.662 qpair failed and we were unable to recover it. 00:25:03.662 [2024-07-16 01:18:19.605785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.662 [2024-07-16 01:18:19.605858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.662 qpair failed and we were unable to recover it. 00:25:03.662 [2024-07-16 01:18:19.606218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.662 [2024-07-16 01:18:19.606291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.662 qpair failed and we were unable to recover it. 00:25:03.662 [2024-07-16 01:18:19.606658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.662 [2024-07-16 01:18:19.606732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.662 qpair failed and we were unable to recover it. 
00:25:03.662 [2024-07-16 01:18:19.607071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.662 [2024-07-16 01:18:19.607148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.662 qpair failed and we were unable to recover it. 00:25:03.662 [2024-07-16 01:18:19.607499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.662 [2024-07-16 01:18:19.607576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.662 qpair failed and we were unable to recover it. 00:25:03.932 [2024-07-16 01:18:19.607947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.932 [2024-07-16 01:18:19.608039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.932 qpair failed and we were unable to recover it. 00:25:03.932 [2024-07-16 01:18:19.608386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.932 [2024-07-16 01:18:19.608424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.932 qpair failed and we were unable to recover it. 00:25:03.932 [2024-07-16 01:18:19.608642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.932 [2024-07-16 01:18:19.608717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.933 qpair failed and we were unable to recover it. 00:25:03.933 [2024-07-16 01:18:19.609067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.933 [2024-07-16 01:18:19.609142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.933 qpair failed and we were unable to recover it. 00:25:03.933 [2024-07-16 01:18:19.609511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.933 [2024-07-16 01:18:19.609584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.933 qpair failed and we were unable to recover it. 00:25:03.933 [2024-07-16 01:18:19.609918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.933 [2024-07-16 01:18:19.610015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.933 qpair failed and we were unable to recover it. 00:25:03.933 [2024-07-16 01:18:19.610372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.933 [2024-07-16 01:18:19.610446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.933 qpair failed and we were unable to recover it. 00:25:03.933 [2024-07-16 01:18:19.610835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.933 [2024-07-16 01:18:19.610910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.933 qpair failed and we were unable to recover it. 
00:25:03.933 [2024-07-16 01:18:19.611289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.933 [2024-07-16 01:18:19.611363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.933 qpair failed and we were unable to recover it. 00:25:03.933 [2024-07-16 01:18:19.611664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.933 [2024-07-16 01:18:19.611742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.933 qpair failed and we were unable to recover it. 00:25:03.933 [2024-07-16 01:18:19.612077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.933 [2024-07-16 01:18:19.612152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.933 qpair failed and we were unable to recover it. 00:25:03.933 [2024-07-16 01:18:19.612465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.933 [2024-07-16 01:18:19.612540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.933 qpair failed and we were unable to recover it. 00:25:03.933 [2024-07-16 01:18:19.612881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.933 [2024-07-16 01:18:19.612971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.933 qpair failed and we were unable to recover it. 00:25:03.933 [2024-07-16 01:18:19.613313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.933 [2024-07-16 01:18:19.613385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.933 qpair failed and we were unable to recover it. 00:25:03.933 [2024-07-16 01:18:19.613722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.933 [2024-07-16 01:18:19.613797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.933 qpair failed and we were unable to recover it. 00:25:03.933 [2024-07-16 01:18:19.614183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.933 [2024-07-16 01:18:19.614259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.933 qpair failed and we were unable to recover it. 00:25:03.933 [2024-07-16 01:18:19.614581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.933 [2024-07-16 01:18:19.614620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.933 qpair failed and we were unable to recover it. 00:25:03.933 [2024-07-16 01:18:19.614819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.933 [2024-07-16 01:18:19.614892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.933 qpair failed and we were unable to recover it. 
00:25:03.933 [2024-07-16 01:18:19.615222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.933 [2024-07-16 01:18:19.615295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.933 qpair failed and we were unable to recover it. 00:25:03.933 [2024-07-16 01:18:19.615638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.933 [2024-07-16 01:18:19.615711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.933 qpair failed and we were unable to recover it. 00:25:03.933 [2024-07-16 01:18:19.616073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.933 [2024-07-16 01:18:19.616161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.933 qpair failed and we were unable to recover it. 00:25:03.933 [2024-07-16 01:18:19.616495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.933 [2024-07-16 01:18:19.616569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.933 qpair failed and we were unable to recover it. 00:25:03.933 [2024-07-16 01:18:19.616936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.933 [2024-07-16 01:18:19.617028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.933 qpair failed and we were unable to recover it. 00:25:03.933 [2024-07-16 01:18:19.617407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.933 [2024-07-16 01:18:19.617481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.933 qpair failed and we were unable to recover it. 00:25:03.933 [2024-07-16 01:18:19.617812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.933 [2024-07-16 01:18:19.617885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.933 qpair failed and we were unable to recover it. 00:25:03.933 [2024-07-16 01:18:19.618294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.933 [2024-07-16 01:18:19.618368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.933 qpair failed and we were unable to recover it. 00:25:03.933 [2024-07-16 01:18:19.618704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.933 [2024-07-16 01:18:19.618778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.933 qpair failed and we were unable to recover it. 00:25:03.933 [2024-07-16 01:18:19.619148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.933 [2024-07-16 01:18:19.619223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.933 qpair failed and we were unable to recover it. 
00:25:03.934 [2024-07-16 01:18:19.619607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.934 [2024-07-16 01:18:19.619681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.934 qpair failed and we were unable to recover it. 00:25:03.934 [2024-07-16 01:18:19.620039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.934 [2024-07-16 01:18:19.620078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.934 qpair failed and we were unable to recover it. 00:25:03.934 [2024-07-16 01:18:19.620294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.934 [2024-07-16 01:18:19.620376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.934 qpair failed and we were unable to recover it. 00:25:03.934 [2024-07-16 01:18:19.620737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.934 [2024-07-16 01:18:19.620810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.934 qpair failed and we were unable to recover it. 00:25:03.934 [2024-07-16 01:18:19.621188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.934 [2024-07-16 01:18:19.621261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.934 qpair failed and we were unable to recover it. 00:25:03.934 [2024-07-16 01:18:19.621604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.934 [2024-07-16 01:18:19.621679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.934 qpair failed and we were unable to recover it. 00:25:03.934 [2024-07-16 01:18:19.622055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.934 [2024-07-16 01:18:19.622130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.934 qpair failed and we were unable to recover it. 00:25:03.934 [2024-07-16 01:18:19.622476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.934 [2024-07-16 01:18:19.622550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.934 qpair failed and we were unable to recover it. 00:25:03.934 [2024-07-16 01:18:19.622908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.934 [2024-07-16 01:18:19.622996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.934 qpair failed and we were unable to recover it. 00:25:03.934 [2024-07-16 01:18:19.623365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.934 [2024-07-16 01:18:19.623439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.934 qpair failed and we were unable to recover it. 
00:25:03.934 [2024-07-16 01:18:19.623781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.934 [2024-07-16 01:18:19.623853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.934 qpair failed and we were unable to recover it. 00:25:03.934 [2024-07-16 01:18:19.624221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.934 [2024-07-16 01:18:19.624296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.934 qpair failed and we were unable to recover it. 00:25:03.934 [2024-07-16 01:18:19.624635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.934 [2024-07-16 01:18:19.624707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.934 qpair failed and we were unable to recover it. 00:25:03.934 [2024-07-16 01:18:19.625044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.934 [2024-07-16 01:18:19.625118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.934 qpair failed and we were unable to recover it. 00:25:03.934 [2024-07-16 01:18:19.625496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.934 [2024-07-16 01:18:19.625569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.934 qpair failed and we were unable to recover it. 00:25:03.934 [2024-07-16 01:18:19.625910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.934 [2024-07-16 01:18:19.626008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.934 qpair failed and we were unable to recover it. 00:25:03.934 [2024-07-16 01:18:19.626360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.934 [2024-07-16 01:18:19.626434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.934 qpair failed and we were unable to recover it. 00:25:03.934 [2024-07-16 01:18:19.626764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.934 [2024-07-16 01:18:19.626837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.934 qpair failed and we were unable to recover it. 00:25:03.934 [2024-07-16 01:18:19.627196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.934 [2024-07-16 01:18:19.627270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.934 qpair failed and we were unable to recover it. 00:25:03.934 [2024-07-16 01:18:19.627654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.934 [2024-07-16 01:18:19.627728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.934 qpair failed and we were unable to recover it. 
00:25:03.934 [2024-07-16 01:18:19.628093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.934 [2024-07-16 01:18:19.628168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.934 qpair failed and we were unable to recover it. 00:25:03.934 [2024-07-16 01:18:19.628543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.934 [2024-07-16 01:18:19.628616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.934 qpair failed and we were unable to recover it. 00:25:03.934 [2024-07-16 01:18:19.628973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.934 [2024-07-16 01:18:19.629048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.934 qpair failed and we were unable to recover it. 00:25:03.934 [2024-07-16 01:18:19.629398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.934 [2024-07-16 01:18:19.629472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.934 qpair failed and we were unable to recover it. 00:25:03.934 [2024-07-16 01:18:19.629834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.934 [2024-07-16 01:18:19.629906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.934 qpair failed and we were unable to recover it. 00:25:03.934 [2024-07-16 01:18:19.630279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.934 [2024-07-16 01:18:19.630371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.934 qpair failed and we were unable to recover it. 00:25:03.934 [2024-07-16 01:18:19.630726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.934 [2024-07-16 01:18:19.630816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.935 qpair failed and we were unable to recover it. 00:25:03.935 [2024-07-16 01:18:19.631221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.935 [2024-07-16 01:18:19.631303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.935 qpair failed and we were unable to recover it. 00:25:03.935 [2024-07-16 01:18:19.631635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.935 [2024-07-16 01:18:19.631723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.935 qpair failed and we were unable to recover it. 00:25:03.935 [2024-07-16 01:18:19.632119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.935 [2024-07-16 01:18:19.632193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.935 qpair failed and we were unable to recover it. 
00:25:03.935 [2024-07-16 01:18:19.632524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.935 [2024-07-16 01:18:19.632597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.935 qpair failed and we were unable to recover it. 00:25:03.935 [2024-07-16 01:18:19.632926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.935 [2024-07-16 01:18:19.633011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.935 qpair failed and we were unable to recover it. 00:25:03.935 [2024-07-16 01:18:19.633348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.935 [2024-07-16 01:18:19.633431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.935 qpair failed and we were unable to recover it. 00:25:03.935 [2024-07-16 01:18:19.633772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.935 [2024-07-16 01:18:19.633846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.935 qpair failed and we were unable to recover it. 00:25:03.935 [2024-07-16 01:18:19.634242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.935 [2024-07-16 01:18:19.634317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.935 qpair failed and we were unable to recover it. 00:25:03.935 [2024-07-16 01:18:19.634687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.935 [2024-07-16 01:18:19.634761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.935 qpair failed and we were unable to recover it. 00:25:03.935 [2024-07-16 01:18:19.635128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.935 [2024-07-16 01:18:19.635202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.935 qpair failed and we were unable to recover it. 00:25:03.935 [2024-07-16 01:18:19.635577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.935 [2024-07-16 01:18:19.635649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.935 qpair failed and we were unable to recover it. 00:25:03.935 [2024-07-16 01:18:19.635995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.935 [2024-07-16 01:18:19.636070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.935 qpair failed and we were unable to recover it. 00:25:03.935 [2024-07-16 01:18:19.636435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.935 [2024-07-16 01:18:19.636509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.935 qpair failed and we were unable to recover it. 
00:25:03.935 [2024-07-16 01:18:19.636841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.935 [2024-07-16 01:18:19.636914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.935 qpair failed and we were unable to recover it. 00:25:03.935 [2024-07-16 01:18:19.637313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.935 [2024-07-16 01:18:19.637385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.935 qpair failed and we were unable to recover it. 00:25:03.935 [2024-07-16 01:18:19.637717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.935 [2024-07-16 01:18:19.637790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.935 qpair failed and we were unable to recover it. 00:25:03.935 [2024-07-16 01:18:19.638131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.935 [2024-07-16 01:18:19.638207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.935 qpair failed and we were unable to recover it. 00:25:03.935 [2024-07-16 01:18:19.638522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.935 [2024-07-16 01:18:19.638607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.935 qpair failed and we were unable to recover it. 00:25:03.935 [2024-07-16 01:18:19.638983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.935 [2024-07-16 01:18:19.639059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.935 qpair failed and we were unable to recover it. 00:25:03.935 [2024-07-16 01:18:19.639414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.935 [2024-07-16 01:18:19.639487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.935 qpair failed and we were unable to recover it. 00:25:03.935 [2024-07-16 01:18:19.639852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.935 [2024-07-16 01:18:19.639927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.935 qpair failed and we were unable to recover it. 00:25:03.935 [2024-07-16 01:18:19.640281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.935 [2024-07-16 01:18:19.640355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.935 qpair failed and we were unable to recover it. 00:25:03.935 [2024-07-16 01:18:19.640701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.935 [2024-07-16 01:18:19.640773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.935 qpair failed and we were unable to recover it. 
00:25:03.935 [2024-07-16 01:18:19.641075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.935 [2024-07-16 01:18:19.641150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.935 qpair failed and we were unable to recover it. 00:25:03.935 [2024-07-16 01:18:19.641488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.935 [2024-07-16 01:18:19.641561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.935 qpair failed and we were unable to recover it. 00:25:03.935 [2024-07-16 01:18:19.641934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.935 [2024-07-16 01:18:19.642038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.935 qpair failed and we were unable to recover it. 00:25:03.935 [2024-07-16 01:18:19.642379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.936 [2024-07-16 01:18:19.642452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.936 qpair failed and we were unable to recover it. 00:25:03.936 [2024-07-16 01:18:19.642815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.936 [2024-07-16 01:18:19.642888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.936 qpair failed and we were unable to recover it. 00:25:03.936 [2024-07-16 01:18:19.643281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.936 [2024-07-16 01:18:19.643355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.936 qpair failed and we were unable to recover it. 00:25:03.936 [2024-07-16 01:18:19.643721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.936 [2024-07-16 01:18:19.643793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.936 qpair failed and we were unable to recover it. 00:25:03.936 [2024-07-16 01:18:19.644175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.936 [2024-07-16 01:18:19.644248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.936 qpair failed and we were unable to recover it. 00:25:03.936 [2024-07-16 01:18:19.644585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.936 [2024-07-16 01:18:19.644658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.936 qpair failed and we were unable to recover it. 00:25:03.936 [2024-07-16 01:18:19.645059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.936 [2024-07-16 01:18:19.645136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.936 qpair failed and we were unable to recover it. 
00:25:03.936 [2024-07-16 01:18:19.645498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.936 [2024-07-16 01:18:19.645571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.936 qpair failed and we were unable to recover it. 00:25:03.936 [2024-07-16 01:18:19.645937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.936 [2024-07-16 01:18:19.646027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.936 qpair failed and we were unable to recover it. 00:25:03.936 [2024-07-16 01:18:19.646330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.936 [2024-07-16 01:18:19.646404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.936 qpair failed and we were unable to recover it. 00:25:03.936 [2024-07-16 01:18:19.646747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.936 [2024-07-16 01:18:19.646821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.936 qpair failed and we were unable to recover it. 00:25:03.936 [2024-07-16 01:18:19.647159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.936 [2024-07-16 01:18:19.647234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.936 qpair failed and we were unable to recover it. 00:25:03.936 [2024-07-16 01:18:19.647598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.936 [2024-07-16 01:18:19.647671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.936 qpair failed and we were unable to recover it. 00:25:03.936 [2024-07-16 01:18:19.648038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.936 [2024-07-16 01:18:19.648112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.936 qpair failed and we were unable to recover it. 00:25:03.936 [2024-07-16 01:18:19.648489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.936 [2024-07-16 01:18:19.648563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.936 qpair failed and we were unable to recover it. 00:25:03.936 [2024-07-16 01:18:19.648941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.936 [2024-07-16 01:18:19.649031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.936 qpair failed and we were unable to recover it. 00:25:03.936 [2024-07-16 01:18:19.649374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.936 [2024-07-16 01:18:19.649449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.936 qpair failed and we were unable to recover it. 
00:25:03.936 [2024-07-16 01:18:19.649808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.936 [2024-07-16 01:18:19.649882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.936 qpair failed and we were unable to recover it. 00:25:03.936 [2024-07-16 01:18:19.650235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.936 [2024-07-16 01:18:19.650310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.936 qpair failed and we were unable to recover it. 00:25:03.936 [2024-07-16 01:18:19.650686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.936 [2024-07-16 01:18:19.650770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.936 qpair failed and we were unable to recover it. 00:25:03.936 [2024-07-16 01:18:19.651076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.936 [2024-07-16 01:18:19.651152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.936 qpair failed and we were unable to recover it. 00:25:03.936 [2024-07-16 01:18:19.651518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.936 [2024-07-16 01:18:19.651590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.936 qpair failed and we were unable to recover it. 00:25:03.936 [2024-07-16 01:18:19.651926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.936 [2024-07-16 01:18:19.652013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.936 qpair failed and we were unable to recover it. 00:25:03.936 [2024-07-16 01:18:19.652354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.936 [2024-07-16 01:18:19.652427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.936 qpair failed and we were unable to recover it. 00:25:03.936 [2024-07-16 01:18:19.652776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.936 [2024-07-16 01:18:19.652848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.936 qpair failed and we were unable to recover it. 00:25:03.936 [2024-07-16 01:18:19.653212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.936 [2024-07-16 01:18:19.653288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.936 qpair failed and we were unable to recover it. 00:25:03.936 [2024-07-16 01:18:19.653657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.936 [2024-07-16 01:18:19.653730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.937 qpair failed and we were unable to recover it. 
00:25:03.937 [2024-07-16 01:18:19.654099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.937 [2024-07-16 01:18:19.654172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.937 qpair failed and we were unable to recover it. 00:25:03.937 [2024-07-16 01:18:19.654515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.937 [2024-07-16 01:18:19.654588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.937 qpair failed and we were unable to recover it. 00:25:03.937 [2024-07-16 01:18:19.654977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.937 [2024-07-16 01:18:19.655055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.937 qpair failed and we were unable to recover it. 00:25:03.937 [2024-07-16 01:18:19.655387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.937 [2024-07-16 01:18:19.655461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.937 qpair failed and we were unable to recover it. 00:25:03.937 [2024-07-16 01:18:19.655840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.937 [2024-07-16 01:18:19.655912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.937 qpair failed and we were unable to recover it. 00:25:03.937 [2024-07-16 01:18:19.656310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.937 [2024-07-16 01:18:19.656384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.937 qpair failed and we were unable to recover it. 00:25:03.937 [2024-07-16 01:18:19.656740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.937 [2024-07-16 01:18:19.656813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.937 qpair failed and we were unable to recover it. 00:25:03.937 [2024-07-16 01:18:19.657175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.937 [2024-07-16 01:18:19.657250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.937 qpair failed and we were unable to recover it. 00:25:03.937 [2024-07-16 01:18:19.657599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.937 [2024-07-16 01:18:19.657672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.937 qpair failed and we were unable to recover it. 00:25:03.937 [2024-07-16 01:18:19.658023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.937 [2024-07-16 01:18:19.658097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.937 qpair failed and we were unable to recover it. 
00:25:03.937 [2024-07-16 01:18:19.658472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.937 [2024-07-16 01:18:19.658545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.937 qpair failed and we were unable to recover it. 00:25:03.937 [2024-07-16 01:18:19.658885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.937 [2024-07-16 01:18:19.658972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.937 qpair failed and we were unable to recover it. 00:25:03.937 [2024-07-16 01:18:19.659330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.937 [2024-07-16 01:18:19.659405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.937 qpair failed and we were unable to recover it. 00:25:03.937 [2024-07-16 01:18:19.659743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.937 [2024-07-16 01:18:19.659819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.937 qpair failed and we were unable to recover it. 00:25:03.937 [2024-07-16 01:18:19.660176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.937 [2024-07-16 01:18:19.660251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.937 qpair failed and we were unable to recover it. 00:25:03.937 [2024-07-16 01:18:19.660592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.937 [2024-07-16 01:18:19.660667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.937 qpair failed and we were unable to recover it. 00:25:03.937 [2024-07-16 01:18:19.660998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.937 [2024-07-16 01:18:19.661073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.937 qpair failed and we were unable to recover it. 00:25:03.937 [2024-07-16 01:18:19.661435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.937 [2024-07-16 01:18:19.661509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.937 qpair failed and we were unable to recover it. 00:25:03.937 [2024-07-16 01:18:19.661859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.937 [2024-07-16 01:18:19.661933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.937 qpair failed and we were unable to recover it. 00:25:03.937 [2024-07-16 01:18:19.662351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.937 [2024-07-16 01:18:19.662426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.937 qpair failed and we were unable to recover it. 
00:25:03.937 [2024-07-16 01:18:19.662757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.937 [2024-07-16 01:18:19.662830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.937 qpair failed and we were unable to recover it. 00:25:03.937 [2024-07-16 01:18:19.663155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.937 [2024-07-16 01:18:19.663231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.937 qpair failed and we were unable to recover it. 00:25:03.937 [2024-07-16 01:18:19.663568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.937 [2024-07-16 01:18:19.663643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.937 qpair failed and we were unable to recover it. 00:25:03.937 [2024-07-16 01:18:19.664015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.937 [2024-07-16 01:18:19.664090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.937 qpair failed and we were unable to recover it. 00:25:03.937 [2024-07-16 01:18:19.664452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.937 [2024-07-16 01:18:19.664526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.937 qpair failed and we were unable to recover it. 00:25:03.937 [2024-07-16 01:18:19.664888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.937 [2024-07-16 01:18:19.664975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.937 qpair failed and we were unable to recover it. 00:25:03.937 [2024-07-16 01:18:19.665367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.937 [2024-07-16 01:18:19.665439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.937 qpair failed and we were unable to recover it. 00:25:03.937 [2024-07-16 01:18:19.665771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.937 [2024-07-16 01:18:19.665844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.937 qpair failed and we were unable to recover it. 00:25:03.937 [2024-07-16 01:18:19.666236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.937 [2024-07-16 01:18:19.666308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.937 qpair failed and we were unable to recover it. 00:25:03.937 [2024-07-16 01:18:19.666674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.938 [2024-07-16 01:18:19.666747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.938 qpair failed and we were unable to recover it. 
00:25:03.938 [2024-07-16 01:18:19.667096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.938 [2024-07-16 01:18:19.667171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.938 qpair failed and we were unable to recover it. 00:25:03.938 [2024-07-16 01:18:19.667538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.938 [2024-07-16 01:18:19.667611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.938 qpair failed and we were unable to recover it. 00:25:03.938 [2024-07-16 01:18:19.667973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.938 [2024-07-16 01:18:19.668068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.938 qpair failed and we were unable to recover it. 00:25:03.938 [2024-07-16 01:18:19.668405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.938 [2024-07-16 01:18:19.668480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.938 qpair failed and we were unable to recover it. 00:25:03.938 [2024-07-16 01:18:19.668823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.938 [2024-07-16 01:18:19.668899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.938 qpair failed and we were unable to recover it. 00:25:03.938 [2024-07-16 01:18:19.669226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.938 [2024-07-16 01:18:19.669300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.938 qpair failed and we were unable to recover it. 00:25:03.938 [2024-07-16 01:18:19.669594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.938 [2024-07-16 01:18:19.669668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.938 qpair failed and we were unable to recover it. 00:25:03.938 [2024-07-16 01:18:19.670037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.938 [2024-07-16 01:18:19.670111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.938 qpair failed and we were unable to recover it. 00:25:03.938 [2024-07-16 01:18:19.670452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.938 [2024-07-16 01:18:19.670526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.938 qpair failed and we were unable to recover it. 00:25:03.938 [2024-07-16 01:18:19.670890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.938 [2024-07-16 01:18:19.670978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.938 qpair failed and we were unable to recover it. 
00:25:03.938 [2024-07-16 01:18:19.671325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.938 [2024-07-16 01:18:19.671398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.938 qpair failed and we were unable to recover it. 00:25:03.938 [2024-07-16 01:18:19.671761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.938 [2024-07-16 01:18:19.671834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.938 qpair failed and we were unable to recover it. 00:25:03.938 [2024-07-16 01:18:19.672221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.938 [2024-07-16 01:18:19.672295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.938 qpair failed and we were unable to recover it. 00:25:03.938 [2024-07-16 01:18:19.672675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.938 [2024-07-16 01:18:19.672749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.938 qpair failed and we were unable to recover it. 00:25:03.938 [2024-07-16 01:18:19.673095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.938 [2024-07-16 01:18:19.673168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.938 qpair failed and we were unable to recover it. 00:25:03.938 [2024-07-16 01:18:19.673539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.938 [2024-07-16 01:18:19.673612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.938 qpair failed and we were unable to recover it. 00:25:03.938 [2024-07-16 01:18:19.674012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.938 [2024-07-16 01:18:19.674088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.938 qpair failed and we were unable to recover it. 00:25:03.938 [2024-07-16 01:18:19.674458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.938 [2024-07-16 01:18:19.674531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.938 qpair failed and we were unable to recover it. 00:25:03.938 [2024-07-16 01:18:19.674874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.938 [2024-07-16 01:18:19.674949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.938 qpair failed and we were unable to recover it. 00:25:03.938 [2024-07-16 01:18:19.675335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.938 [2024-07-16 01:18:19.675409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.938 qpair failed and we were unable to recover it. 
00:25:03.938 [2024-07-16 01:18:19.675779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.938 [2024-07-16 01:18:19.675853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.938 qpair failed and we were unable to recover it. 00:25:03.938 [2024-07-16 01:18:19.676211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.938 [2024-07-16 01:18:19.676286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.938 qpair failed and we were unable to recover it. 00:25:03.938 [2024-07-16 01:18:19.676655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.938 [2024-07-16 01:18:19.676728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.938 qpair failed and we were unable to recover it. 00:25:03.938 [2024-07-16 01:18:19.677065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.938 [2024-07-16 01:18:19.677141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.938 qpair failed and we were unable to recover it. 00:25:03.938 [2024-07-16 01:18:19.677478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.938 [2024-07-16 01:18:19.677551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.938 qpair failed and we were unable to recover it. 00:25:03.938 [2024-07-16 01:18:19.677923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.938 [2024-07-16 01:18:19.678032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.938 qpair failed and we were unable to recover it. 00:25:03.938 [2024-07-16 01:18:19.678399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.938 [2024-07-16 01:18:19.678471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.938 qpair failed and we were unable to recover it. 00:25:03.938 [2024-07-16 01:18:19.678813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.938 [2024-07-16 01:18:19.678888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.938 qpair failed and we were unable to recover it. 00:25:03.938 [2024-07-16 01:18:19.679274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.938 [2024-07-16 01:18:19.679348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.938 qpair failed and we were unable to recover it. 00:25:03.938 [2024-07-16 01:18:19.679727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.938 [2024-07-16 01:18:19.679800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.938 qpair failed and we were unable to recover it. 
00:25:03.938 [2024-07-16 01:18:19.680159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.938 [2024-07-16 01:18:19.680233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.938 qpair failed and we were unable to recover it. 00:25:03.939 [2024-07-16 01:18:19.680575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.939 [2024-07-16 01:18:19.680650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.939 qpair failed and we were unable to recover it. 00:25:03.939 [2024-07-16 01:18:19.680995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.939 [2024-07-16 01:18:19.681070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.939 qpair failed and we were unable to recover it. 00:25:03.939 [2024-07-16 01:18:19.681441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.939 [2024-07-16 01:18:19.681515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.939 qpair failed and we were unable to recover it. 00:25:03.939 [2024-07-16 01:18:19.681856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.939 [2024-07-16 01:18:19.681931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.939 qpair failed and we were unable to recover it. 00:25:03.939 [2024-07-16 01:18:19.682316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.939 [2024-07-16 01:18:19.682390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.939 qpair failed and we were unable to recover it. 00:25:03.939 [2024-07-16 01:18:19.682727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.939 [2024-07-16 01:18:19.682801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.939 qpair failed and we were unable to recover it. 00:25:03.939 [2024-07-16 01:18:19.683200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.939 [2024-07-16 01:18:19.683275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.939 qpair failed and we were unable to recover it. 00:25:03.939 [2024-07-16 01:18:19.683597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.939 [2024-07-16 01:18:19.683672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.939 qpair failed and we were unable to recover it. 00:25:03.939 [2024-07-16 01:18:19.684049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.939 [2024-07-16 01:18:19.684124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.939 qpair failed and we were unable to recover it. 
00:25:03.939 [2024-07-16 01:18:19.684506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.939 [2024-07-16 01:18:19.684579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.939 qpair failed and we were unable to recover it. 00:25:03.939 [2024-07-16 01:18:19.684949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.939 [2024-07-16 01:18:19.685046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.939 qpair failed and we were unable to recover it. 00:25:03.939 [2024-07-16 01:18:19.685385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.939 [2024-07-16 01:18:19.685470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.939 qpair failed and we were unable to recover it. 00:25:03.939 [2024-07-16 01:18:19.685840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.939 [2024-07-16 01:18:19.685914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.939 qpair failed and we were unable to recover it. 00:25:03.939 [2024-07-16 01:18:19.686263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.939 [2024-07-16 01:18:19.686336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.939 qpair failed and we were unable to recover it. 00:25:03.939 [2024-07-16 01:18:19.686708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.939 [2024-07-16 01:18:19.686781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.939 qpair failed and we were unable to recover it. 00:25:03.939 [2024-07-16 01:18:19.687102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.939 [2024-07-16 01:18:19.687177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.939 qpair failed and we were unable to recover it. 00:25:03.939 [2024-07-16 01:18:19.687547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.939 [2024-07-16 01:18:19.687620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.939 qpair failed and we were unable to recover it. 00:25:03.939 [2024-07-16 01:18:19.687974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.939 [2024-07-16 01:18:19.688048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.939 qpair failed and we were unable to recover it. 00:25:03.939 [2024-07-16 01:18:19.688423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.939 [2024-07-16 01:18:19.688495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.939 qpair failed and we were unable to recover it. 
00:25:03.939 [2024-07-16 01:18:19.688878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.939 [2024-07-16 01:18:19.688952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.939 qpair failed and we were unable to recover it. 00:25:03.939 [2024-07-16 01:18:19.689340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.939 [2024-07-16 01:18:19.689415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.939 qpair failed and we were unable to recover it. 00:25:03.939 [2024-07-16 01:18:19.689721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.939 [2024-07-16 01:18:19.689794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.939 qpair failed and we were unable to recover it. 00:25:03.939 [2024-07-16 01:18:19.690145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.939 [2024-07-16 01:18:19.690220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.939 qpair failed and we were unable to recover it. 00:25:03.939 [2024-07-16 01:18:19.690550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.939 [2024-07-16 01:18:19.690623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.939 qpair failed and we were unable to recover it. 00:25:03.939 [2024-07-16 01:18:19.690996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.939 [2024-07-16 01:18:19.691071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.939 qpair failed and we were unable to recover it. 00:25:03.939 [2024-07-16 01:18:19.691458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.939 [2024-07-16 01:18:19.691530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.939 qpair failed and we were unable to recover it. 00:25:03.939 [2024-07-16 01:18:19.691882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.939 [2024-07-16 01:18:19.691970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.939 qpair failed and we were unable to recover it. 00:25:03.939 [2024-07-16 01:18:19.692354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.939 [2024-07-16 01:18:19.692426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.939 qpair failed and we were unable to recover it. 00:25:03.939 [2024-07-16 01:18:19.692791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.939 [2024-07-16 01:18:19.692865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.939 qpair failed and we were unable to recover it. 
[2024-07-16 01:18:19.689340 through 01:18:19.767544: the same three-record failure repeats verbatim for every reconnect attempt -- connect() failed, errno = 111; sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. Duplicate records omitted.]
00:25:03.945 [2024-07-16 01:18:19.764744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.945 [2024-07-16 01:18:19.764809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.945 qpair failed and we were unable to recover it. 00:25:03.945 [2024-07-16 01:18:19.765058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.945 [2024-07-16 01:18:19.765125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.945 qpair failed and we were unable to recover it. 00:25:03.945 [2024-07-16 01:18:19.765418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.945 [2024-07-16 01:18:19.765485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.945 qpair failed and we were unable to recover it. 00:25:03.945 [2024-07-16 01:18:19.765755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.945 [2024-07-16 01:18:19.765820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.945 qpair failed and we were unable to recover it. 00:25:03.945 [2024-07-16 01:18:19.766050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.945 [2024-07-16 01:18:19.766117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.945 qpair failed and we were unable to recover it. 00:25:03.945 [2024-07-16 01:18:19.766379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.945 [2024-07-16 01:18:19.766444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.945 qpair failed and we were unable to recover it. 00:25:03.945 [2024-07-16 01:18:19.766758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.945 [2024-07-16 01:18:19.766823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.945 qpair failed and we were unable to recover it. 00:25:03.945 [2024-07-16 01:18:19.767101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.945 [2024-07-16 01:18:19.767167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.945 qpair failed and we were unable to recover it. 00:25:03.945 [2024-07-16 01:18:19.767479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.945 [2024-07-16 01:18:19.767544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.945 qpair failed and we were unable to recover it. 00:25:03.945 [2024-07-16 01:18:19.767777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.945 [2024-07-16 01:18:19.767844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.945 qpair failed and we were unable to recover it. 
00:25:03.945 [2024-07-16 01:18:19.768169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.945 [2024-07-16 01:18:19.768236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.945 qpair failed and we were unable to recover it. 00:25:03.945 [2024-07-16 01:18:19.768550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.945 [2024-07-16 01:18:19.768618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.945 qpair failed and we were unable to recover it. 00:25:03.945 [2024-07-16 01:18:19.768941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.945 [2024-07-16 01:18:19.769028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.945 qpair failed and we were unable to recover it. 00:25:03.945 [2024-07-16 01:18:19.769309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.945 [2024-07-16 01:18:19.769376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.945 qpair failed and we were unable to recover it. 00:25:03.945 [2024-07-16 01:18:19.769693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.945 [2024-07-16 01:18:19.769758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.945 qpair failed and we were unable to recover it. 00:25:03.945 [2024-07-16 01:18:19.770018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.945 [2024-07-16 01:18:19.770086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.945 qpair failed and we were unable to recover it. 00:25:03.945 [2024-07-16 01:18:19.770382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.945 [2024-07-16 01:18:19.770448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.945 qpair failed and we were unable to recover it. 00:25:03.945 [2024-07-16 01:18:19.770724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.945 [2024-07-16 01:18:19.770790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.945 qpair failed and we were unable to recover it. 00:25:03.945 [2024-07-16 01:18:19.771098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.945 [2024-07-16 01:18:19.771166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.945 qpair failed and we were unable to recover it. 00:25:03.945 [2024-07-16 01:18:19.771419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.945 [2024-07-16 01:18:19.771484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.945 qpair failed and we were unable to recover it. 
00:25:03.945 [2024-07-16 01:18:19.771768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.945 [2024-07-16 01:18:19.771832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.945 qpair failed and we were unable to recover it. 00:25:03.945 [2024-07-16 01:18:19.772109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.945 [2024-07-16 01:18:19.772176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.945 qpair failed and we were unable to recover it. 00:25:03.945 [2024-07-16 01:18:19.772475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.945 [2024-07-16 01:18:19.772541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.945 qpair failed and we were unable to recover it. 00:25:03.945 [2024-07-16 01:18:19.772784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.945 [2024-07-16 01:18:19.772850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.945 qpair failed and we were unable to recover it. 00:25:03.945 [2024-07-16 01:18:19.773174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.945 [2024-07-16 01:18:19.773240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.945 qpair failed and we were unable to recover it. 00:25:03.945 [2024-07-16 01:18:19.773512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.945 [2024-07-16 01:18:19.773577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.945 qpair failed and we were unable to recover it. 00:25:03.945 [2024-07-16 01:18:19.773857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.945 [2024-07-16 01:18:19.773922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.945 qpair failed and we were unable to recover it. 00:25:03.945 [2024-07-16 01:18:19.774254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.945 [2024-07-16 01:18:19.774321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.945 qpair failed and we were unable to recover it. 00:25:03.945 [2024-07-16 01:18:19.774561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.945 [2024-07-16 01:18:19.774627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.945 qpair failed and we were unable to recover it. 00:25:03.945 [2024-07-16 01:18:19.774940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.945 [2024-07-16 01:18:19.775024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.945 qpair failed and we were unable to recover it. 
00:25:03.945 [2024-07-16 01:18:19.775342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.945 [2024-07-16 01:18:19.775407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.945 qpair failed and we were unable to recover it. 00:25:03.945 [2024-07-16 01:18:19.775696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.945 [2024-07-16 01:18:19.775761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.945 qpair failed and we were unable to recover it. 00:25:03.945 [2024-07-16 01:18:19.776068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.945 [2024-07-16 01:18:19.776146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.945 qpair failed and we were unable to recover it. 00:25:03.945 [2024-07-16 01:18:19.776470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.945 [2024-07-16 01:18:19.776535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.945 qpair failed and we were unable to recover it. 00:25:03.945 [2024-07-16 01:18:19.776857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.945 [2024-07-16 01:18:19.776922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.945 qpair failed and we were unable to recover it. 00:25:03.945 [2024-07-16 01:18:19.777260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.945 [2024-07-16 01:18:19.777325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.946 qpair failed and we were unable to recover it. 00:25:03.946 [2024-07-16 01:18:19.777640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.946 [2024-07-16 01:18:19.777705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.946 qpair failed and we were unable to recover it. 00:25:03.946 [2024-07-16 01:18:19.778009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.946 [2024-07-16 01:18:19.778076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.946 qpair failed and we were unable to recover it. 00:25:03.946 [2024-07-16 01:18:19.778340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.946 [2024-07-16 01:18:19.778406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.946 qpair failed and we were unable to recover it. 00:25:03.946 [2024-07-16 01:18:19.778677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.946 [2024-07-16 01:18:19.778742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.946 qpair failed and we were unable to recover it. 
00:25:03.946 [2024-07-16 01:18:19.779073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.946 [2024-07-16 01:18:19.779140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.946 qpair failed and we were unable to recover it. 00:25:03.946 [2024-07-16 01:18:19.779458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.946 [2024-07-16 01:18:19.779524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.946 qpair failed and we were unable to recover it. 00:25:03.946 [2024-07-16 01:18:19.779841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.946 [2024-07-16 01:18:19.779906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.946 qpair failed and we were unable to recover it. 00:25:03.946 [2024-07-16 01:18:19.780232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.946 [2024-07-16 01:18:19.780298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.946 qpair failed and we were unable to recover it. 00:25:03.946 [2024-07-16 01:18:19.780584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.946 [2024-07-16 01:18:19.780649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.946 qpair failed and we were unable to recover it. 00:25:03.946 [2024-07-16 01:18:19.780986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.946 [2024-07-16 01:18:19.781052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.946 qpair failed and we were unable to recover it. 00:25:03.946 [2024-07-16 01:18:19.781349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.946 [2024-07-16 01:18:19.781414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.946 qpair failed and we were unable to recover it. 00:25:03.946 [2024-07-16 01:18:19.781703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.946 [2024-07-16 01:18:19.781769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.946 qpair failed and we were unable to recover it. 00:25:03.946 [2024-07-16 01:18:19.782044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.946 [2024-07-16 01:18:19.782113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.946 qpair failed and we were unable to recover it. 00:25:03.946 [2024-07-16 01:18:19.782412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.946 [2024-07-16 01:18:19.782478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.946 qpair failed and we were unable to recover it. 
00:25:03.946 [2024-07-16 01:18:19.782764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.946 [2024-07-16 01:18:19.782830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.946 qpair failed and we were unable to recover it. 00:25:03.946 [2024-07-16 01:18:19.783135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.946 [2024-07-16 01:18:19.783203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.946 qpair failed and we were unable to recover it. 00:25:03.946 [2024-07-16 01:18:19.783523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.946 [2024-07-16 01:18:19.783588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.946 qpair failed and we were unable to recover it. 00:25:03.946 [2024-07-16 01:18:19.783861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.946 [2024-07-16 01:18:19.783926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.946 qpair failed and we were unable to recover it. 00:25:03.946 [2024-07-16 01:18:19.784251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.946 [2024-07-16 01:18:19.784317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.946 qpair failed and we were unable to recover it. 00:25:03.946 [2024-07-16 01:18:19.784639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.946 [2024-07-16 01:18:19.784704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.946 qpair failed and we were unable to recover it. 00:25:03.946 [2024-07-16 01:18:19.785028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.946 [2024-07-16 01:18:19.785095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.946 qpair failed and we were unable to recover it. 00:25:03.946 [2024-07-16 01:18:19.785361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.946 [2024-07-16 01:18:19.785427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.946 qpair failed and we were unable to recover it. 00:25:03.946 [2024-07-16 01:18:19.785705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.946 [2024-07-16 01:18:19.785747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.946 qpair failed and we were unable to recover it. 00:25:03.946 [2024-07-16 01:18:19.785986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.946 [2024-07-16 01:18:19.786062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.946 qpair failed and we were unable to recover it. 
00:25:03.946 [2024-07-16 01:18:19.786373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.946 [2024-07-16 01:18:19.786415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.946 qpair failed and we were unable to recover it. 00:25:03.946 [2024-07-16 01:18:19.786620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.946 [2024-07-16 01:18:19.786686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.946 qpair failed and we were unable to recover it. 00:25:03.946 [2024-07-16 01:18:19.786996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.946 [2024-07-16 01:18:19.787064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.946 qpair failed and we were unable to recover it. 00:25:03.946 [2024-07-16 01:18:19.787336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.946 [2024-07-16 01:18:19.787401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.946 qpair failed and we were unable to recover it. 00:25:03.946 [2024-07-16 01:18:19.787708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.946 [2024-07-16 01:18:19.787773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.946 qpair failed and we were unable to recover it. 00:25:03.946 [2024-07-16 01:18:19.788088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.946 [2024-07-16 01:18:19.788154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.946 qpair failed and we were unable to recover it. 00:25:03.946 [2024-07-16 01:18:19.788465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.946 [2024-07-16 01:18:19.788532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.946 qpair failed and we were unable to recover it. 00:25:03.946 [2024-07-16 01:18:19.788846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.946 [2024-07-16 01:18:19.788911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.946 qpair failed and we were unable to recover it. 00:25:03.946 [2024-07-16 01:18:19.789210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.946 [2024-07-16 01:18:19.789276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.946 qpair failed and we were unable to recover it. 00:25:03.946 [2024-07-16 01:18:19.789578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.946 [2024-07-16 01:18:19.789643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.946 qpair failed and we were unable to recover it. 
00:25:03.946 [2024-07-16 01:18:19.789881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.946 [2024-07-16 01:18:19.789948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.946 qpair failed and we were unable to recover it. 00:25:03.946 [2024-07-16 01:18:19.790226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.946 [2024-07-16 01:18:19.790292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.946 qpair failed and we were unable to recover it. 00:25:03.946 [2024-07-16 01:18:19.790552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.946 [2024-07-16 01:18:19.790617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.946 qpair failed and we were unable to recover it. 00:25:03.946 [2024-07-16 01:18:19.790907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.946 [2024-07-16 01:18:19.790993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.946 qpair failed and we were unable to recover it. 00:25:03.946 [2024-07-16 01:18:19.791318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.946 [2024-07-16 01:18:19.791361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.946 qpair failed and we were unable to recover it. 00:25:03.946 [2024-07-16 01:18:19.791497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.946 [2024-07-16 01:18:19.791539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.946 qpair failed and we were unable to recover it. 00:25:03.946 [2024-07-16 01:18:19.791776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.946 [2024-07-16 01:18:19.791842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.946 qpair failed and we were unable to recover it. 00:25:03.946 [2024-07-16 01:18:19.792125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.946 [2024-07-16 01:18:19.792192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.946 qpair failed and we were unable to recover it. 00:25:03.946 [2024-07-16 01:18:19.792499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.946 [2024-07-16 01:18:19.792565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.946 qpair failed and we were unable to recover it. 00:25:03.946 [2024-07-16 01:18:19.792843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.947 [2024-07-16 01:18:19.792908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.947 qpair failed and we were unable to recover it. 
00:25:03.947 [2024-07-16 01:18:19.793253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.947 [2024-07-16 01:18:19.793319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.947 qpair failed and we were unable to recover it. 00:25:03.947 [2024-07-16 01:18:19.793634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.947 [2024-07-16 01:18:19.793700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.947 qpair failed and we were unable to recover it. 00:25:03.947 [2024-07-16 01:18:19.794014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.947 [2024-07-16 01:18:19.794083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.947 qpair failed and we were unable to recover it. 00:25:03.947 [2024-07-16 01:18:19.794395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.947 [2024-07-16 01:18:19.794460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.947 qpair failed and we were unable to recover it. 00:25:03.947 [2024-07-16 01:18:19.794709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.947 [2024-07-16 01:18:19.794774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.947 qpair failed and we were unable to recover it. 00:25:03.947 [2024-07-16 01:18:19.795078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.947 [2024-07-16 01:18:19.795145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.947 qpair failed and we were unable to recover it. 00:25:03.947 [2024-07-16 01:18:19.795468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.947 [2024-07-16 01:18:19.795533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.947 qpair failed and we were unable to recover it. 00:25:03.947 [2024-07-16 01:18:19.795847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.947 [2024-07-16 01:18:19.795913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.947 qpair failed and we were unable to recover it. 00:25:03.947 [2024-07-16 01:18:19.796237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.947 [2024-07-16 01:18:19.796303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.947 qpair failed and we were unable to recover it. 00:25:03.947 [2024-07-16 01:18:19.796637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.947 [2024-07-16 01:18:19.796702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.947 qpair failed and we were unable to recover it. 
00:25:03.947 [2024-07-16 01:18:19.796990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.947 [2024-07-16 01:18:19.797060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.947 qpair failed and we were unable to recover it. 00:25:03.947 [2024-07-16 01:18:19.797310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.947 [2024-07-16 01:18:19.797377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.947 qpair failed and we were unable to recover it. 00:25:03.947 [2024-07-16 01:18:19.797655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.947 [2024-07-16 01:18:19.797720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.947 qpair failed and we were unable to recover it. 00:25:03.947 [2024-07-16 01:18:19.798037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.947 [2024-07-16 01:18:19.798105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.947 qpair failed and we were unable to recover it. 00:25:03.947 [2024-07-16 01:18:19.798402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.947 [2024-07-16 01:18:19.798445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.947 qpair failed and we were unable to recover it. 00:25:03.947 [2024-07-16 01:18:19.798675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.947 [2024-07-16 01:18:19.798741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.947 qpair failed and we were unable to recover it. 00:25:03.947 [2024-07-16 01:18:19.799002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.947 [2024-07-16 01:18:19.799069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.947 qpair failed and we were unable to recover it. 00:25:03.947 [2024-07-16 01:18:19.799380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.947 [2024-07-16 01:18:19.799448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.947 qpair failed and we were unable to recover it. 00:25:03.947 [2024-07-16 01:18:19.799679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.947 [2024-07-16 01:18:19.799746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.947 qpair failed and we were unable to recover it. 00:25:03.947 [2024-07-16 01:18:19.800025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.947 [2024-07-16 01:18:19.800123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.947 qpair failed and we were unable to recover it. 
00:25:03.947 [2024-07-16 01:18:19.800402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.947 [2024-07-16 01:18:19.800468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.947 qpair failed and we were unable to recover it. 00:25:03.947 [2024-07-16 01:18:19.800711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.947 [2024-07-16 01:18:19.800778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.947 qpair failed and we were unable to recover it. 00:25:03.947 [2024-07-16 01:18:19.801091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.947 [2024-07-16 01:18:19.801159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.947 qpair failed and we were unable to recover it. 00:25:03.947 [2024-07-16 01:18:19.801443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.947 [2024-07-16 01:18:19.801510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.947 qpair failed and we were unable to recover it. 00:25:03.947 [2024-07-16 01:18:19.801816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.947 [2024-07-16 01:18:19.801858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.947 qpair failed and we were unable to recover it. 00:25:03.947 [2024-07-16 01:18:19.802044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.947 [2024-07-16 01:18:19.802108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.947 qpair failed and we were unable to recover it. 00:25:03.947 [2024-07-16 01:18:19.802408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.947 [2024-07-16 01:18:19.802450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.947 qpair failed and we were unable to recover it. 00:25:03.947 [2024-07-16 01:18:19.802602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.947 [2024-07-16 01:18:19.802646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.947 qpair failed and we were unable to recover it. 00:25:03.947 [2024-07-16 01:18:19.802828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.947 [2024-07-16 01:18:19.802892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.947 qpair failed and we were unable to recover it. 00:25:03.947 [2024-07-16 01:18:19.803222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.947 [2024-07-16 01:18:19.803289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.947 qpair failed and we were unable to recover it. 
00:25:03.947 [2024-07-16 01:18:19.803610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.947 [2024-07-16 01:18:19.803676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.947 qpair failed and we were unable to recover it. 00:25:03.947 [2024-07-16 01:18:19.804001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.947 [2024-07-16 01:18:19.804067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.947 qpair failed and we were unable to recover it. 00:25:03.947 [2024-07-16 01:18:19.804376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.947 [2024-07-16 01:18:19.804442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.947 qpair failed and we were unable to recover it. 00:25:03.947 [2024-07-16 01:18:19.804774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.947 [2024-07-16 01:18:19.804840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.947 qpair failed and we were unable to recover it. 00:25:03.947 [2024-07-16 01:18:19.805104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.947 [2024-07-16 01:18:19.805147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.947 qpair failed and we were unable to recover it. 00:25:03.947 [2024-07-16 01:18:19.805379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.947 [2024-07-16 01:18:19.805446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.947 qpair failed and we were unable to recover it. 00:25:03.947 [2024-07-16 01:18:19.805765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.947 [2024-07-16 01:18:19.805830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.947 qpair failed and we were unable to recover it. 00:25:03.947 [2024-07-16 01:18:19.806101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.947 [2024-07-16 01:18:19.806145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.947 qpair failed and we were unable to recover it. 00:25:03.947 [2024-07-16 01:18:19.806396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.947 [2024-07-16 01:18:19.806461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.947 qpair failed and we were unable to recover it. 00:25:03.947 [2024-07-16 01:18:19.806742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.947 [2024-07-16 01:18:19.806808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.947 qpair failed and we were unable to recover it. 
00:25:03.947 [2024-07-16 01:18:19.807091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.947 [2024-07-16 01:18:19.807158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.947 qpair failed and we were unable to recover it. 00:25:03.948 [2024-07-16 01:18:19.807447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.948 [2024-07-16 01:18:19.807512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.948 qpair failed and we were unable to recover it. 00:25:03.948 [2024-07-16 01:18:19.807836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.948 [2024-07-16 01:18:19.807901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.948 qpair failed and we were unable to recover it. 00:25:03.948 [2024-07-16 01:18:19.808169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.948 [2024-07-16 01:18:19.808238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.948 qpair failed and we were unable to recover it. 00:25:03.948 [2024-07-16 01:18:19.808502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.948 [2024-07-16 01:18:19.808568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.948 qpair failed and we were unable to recover it. 00:25:03.948 [2024-07-16 01:18:19.808878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.948 [2024-07-16 01:18:19.808920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.948 qpair failed and we were unable to recover it. 00:25:03.948 [2024-07-16 01:18:19.809207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.948 [2024-07-16 01:18:19.809273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.948 qpair failed and we were unable to recover it. 00:25:03.948 [2024-07-16 01:18:19.809513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.948 [2024-07-16 01:18:19.809580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.948 qpair failed and we were unable to recover it. 00:25:03.948 [2024-07-16 01:18:19.809903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.948 [2024-07-16 01:18:19.810002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.948 qpair failed and we were unable to recover it. 00:25:03.948 [2024-07-16 01:18:19.810325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.948 [2024-07-16 01:18:19.810390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.948 qpair failed and we were unable to recover it. 
00:25:03.948 [2024-07-16 01:18:19.810704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.948 [2024-07-16 01:18:19.810768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.948 qpair failed and we were unable to recover it. 00:25:03.948 [2024-07-16 01:18:19.811067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.948 [2024-07-16 01:18:19.811111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.948 qpair failed and we were unable to recover it. 00:25:03.948 [2024-07-16 01:18:19.811267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.948 [2024-07-16 01:18:19.811310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.948 qpair failed and we were unable to recover it. 00:25:03.948 [2024-07-16 01:18:19.811492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.948 [2024-07-16 01:18:19.811555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.948 qpair failed and we were unable to recover it. 00:25:03.948 [2024-07-16 01:18:19.811821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.948 [2024-07-16 01:18:19.811887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.948 qpair failed and we were unable to recover it. 00:25:03.948 [2024-07-16 01:18:19.812210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.948 [2024-07-16 01:18:19.812275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.948 qpair failed and we were unable to recover it. 00:25:03.948 [2024-07-16 01:18:19.812512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.948 [2024-07-16 01:18:19.812578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.948 qpair failed and we were unable to recover it. 00:25:03.948 [2024-07-16 01:18:19.812875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.948 [2024-07-16 01:18:19.812917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.948 qpair failed and we were unable to recover it. 00:25:03.948 [2024-07-16 01:18:19.813114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.948 [2024-07-16 01:18:19.813190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.948 qpair failed and we were unable to recover it. 00:25:03.948 [2024-07-16 01:18:19.813462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.948 [2024-07-16 01:18:19.813537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.948 qpair failed and we were unable to recover it. 
00:25:03.948 [2024-07-16 01:18:19.813816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.948 [2024-07-16 01:18:19.813884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:03.948 qpair failed and we were unable to recover it.
00:25:03.948 [... this three-line pattern repeats roughly 200 more times between 01:18:19.814 and 01:18:19.857, always with errno = 111 for tqpair=0x7f7a68000b90, addr=10.0.0.2, port=4420 ...]
00:25:03.952 [2024-07-16 01:18:19.857488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.952 [2024-07-16 01:18:19.857523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:03.952 qpair failed and we were unable to recover it.
00:25:03.952 [2024-07-16 01:18:19.857685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.953 [2024-07-16 01:18:19.857737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.953 qpair failed and we were unable to recover it. 00:25:03.953 [2024-07-16 01:18:19.857981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.953 [2024-07-16 01:18:19.858028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.953 qpair failed and we were unable to recover it. 00:25:03.953 [2024-07-16 01:18:19.858158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.953 [2024-07-16 01:18:19.858194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.953 qpair failed and we were unable to recover it. 00:25:03.953 [2024-07-16 01:18:19.858327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.953 [2024-07-16 01:18:19.858362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.953 qpair failed and we were unable to recover it. 00:25:03.953 [2024-07-16 01:18:19.858543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.953 [2024-07-16 01:18:19.858578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.953 qpair failed and we were unable to recover it. 00:25:03.953 [2024-07-16 01:18:19.858712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.953 [2024-07-16 01:18:19.858746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.953 qpair failed and we were unable to recover it. 00:25:03.953 [2024-07-16 01:18:19.858893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.953 [2024-07-16 01:18:19.858927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.953 qpair failed and we were unable to recover it. 00:25:03.953 [2024-07-16 01:18:19.859096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.953 [2024-07-16 01:18:19.859132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.953 qpair failed and we were unable to recover it. 00:25:03.953 [2024-07-16 01:18:19.859283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.953 [2024-07-16 01:18:19.859318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.953 qpair failed and we were unable to recover it. 00:25:03.953 [2024-07-16 01:18:19.859477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.953 [2024-07-16 01:18:19.859511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.953 qpair failed and we were unable to recover it. 
00:25:03.953 [2024-07-16 01:18:19.859738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.953 [2024-07-16 01:18:19.859794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.953 qpair failed and we were unable to recover it. 00:25:03.953 [2024-07-16 01:18:19.860035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.953 [2024-07-16 01:18:19.860070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.953 qpair failed and we were unable to recover it. 00:25:03.953 [2024-07-16 01:18:19.860211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.953 [2024-07-16 01:18:19.860262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.953 qpair failed and we were unable to recover it. 00:25:03.953 [2024-07-16 01:18:19.860524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.953 [2024-07-16 01:18:19.860558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.953 qpair failed and we were unable to recover it. 00:25:03.953 [2024-07-16 01:18:19.860718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.953 [2024-07-16 01:18:19.860786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.953 qpair failed and we were unable to recover it. 00:25:03.953 [2024-07-16 01:18:19.861050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.953 [2024-07-16 01:18:19.861086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.953 qpair failed and we were unable to recover it. 00:25:03.953 [2024-07-16 01:18:19.861358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.953 [2024-07-16 01:18:19.861459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.953 qpair failed and we were unable to recover it. 00:25:03.953 [2024-07-16 01:18:19.861793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.953 [2024-07-16 01:18:19.861848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.953 qpair failed and we were unable to recover it. 00:25:03.953 [2024-07-16 01:18:19.862057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.953 [2024-07-16 01:18:19.862095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.953 qpair failed and we were unable to recover it. 00:25:03.953 [2024-07-16 01:18:19.862258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.953 [2024-07-16 01:18:19.862293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.953 qpair failed and we were unable to recover it. 
00:25:03.953 [2024-07-16 01:18:19.862427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.953 [2024-07-16 01:18:19.862461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.953 qpair failed and we were unable to recover it. 00:25:03.953 [2024-07-16 01:18:19.862713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.953 [2024-07-16 01:18:19.862779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:03.953 qpair failed and we were unable to recover it. 00:25:03.953 [2024-07-16 01:18:19.863057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.953 [2024-07-16 01:18:19.863093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.953 qpair failed and we were unable to recover it. 00:25:03.953 [2024-07-16 01:18:19.863225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.953 [2024-07-16 01:18:19.863259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.953 qpair failed and we were unable to recover it. 00:25:03.953 [2024-07-16 01:18:19.863396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.953 [2024-07-16 01:18:19.863431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.953 qpair failed and we were unable to recover it. 00:25:03.953 [2024-07-16 01:18:19.863725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.953 [2024-07-16 01:18:19.863784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.953 qpair failed and we were unable to recover it. 00:25:03.953 [2024-07-16 01:18:19.864033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.953 [2024-07-16 01:18:19.864068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.953 qpair failed and we were unable to recover it. 00:25:03.953 [2024-07-16 01:18:19.864200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.953 [2024-07-16 01:18:19.864242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.953 qpair failed and we were unable to recover it. 00:25:03.953 [2024-07-16 01:18:19.864395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.953 [2024-07-16 01:18:19.864444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.953 qpair failed and we were unable to recover it. 00:25:03.953 [2024-07-16 01:18:19.864690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.953 [2024-07-16 01:18:19.864733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.953 qpair failed and we were unable to recover it. 
00:25:03.953 [2024-07-16 01:18:19.864916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.953 [2024-07-16 01:18:19.864950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.953 qpair failed and we were unable to recover it. 00:25:03.953 [2024-07-16 01:18:19.865118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.953 [2024-07-16 01:18:19.865152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.953 qpair failed and we were unable to recover it. 00:25:03.953 [2024-07-16 01:18:19.865297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.953 [2024-07-16 01:18:19.865333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.953 qpair failed and we were unable to recover it. 00:25:03.953 [2024-07-16 01:18:19.865468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.953 [2024-07-16 01:18:19.865504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.953 qpair failed and we were unable to recover it. 00:25:03.953 [2024-07-16 01:18:19.865696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.953 [2024-07-16 01:18:19.865747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.953 qpair failed and we were unable to recover it. 00:25:03.953 [2024-07-16 01:18:19.866003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.953 [2024-07-16 01:18:19.866038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.953 qpair failed and we were unable to recover it. 00:25:03.953 [2024-07-16 01:18:19.866191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.953 [2024-07-16 01:18:19.866225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.953 qpair failed and we were unable to recover it. 00:25:03.953 [2024-07-16 01:18:19.866375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.953 [2024-07-16 01:18:19.866440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.953 qpair failed and we were unable to recover it. 00:25:03.953 [2024-07-16 01:18:19.866734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.953 [2024-07-16 01:18:19.866786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.953 qpair failed and we were unable to recover it. 00:25:03.953 [2024-07-16 01:18:19.867016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.953 [2024-07-16 01:18:19.867050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.953 qpair failed and we were unable to recover it. 
00:25:03.953 [2024-07-16 01:18:19.867168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.953 [2024-07-16 01:18:19.867203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.953 qpair failed and we were unable to recover it. 00:25:03.953 [2024-07-16 01:18:19.867383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.953 [2024-07-16 01:18:19.867417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.953 qpair failed and we were unable to recover it. 00:25:03.953 [2024-07-16 01:18:19.867583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.953 [2024-07-16 01:18:19.867639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.953 qpair failed and we were unable to recover it. 00:25:03.953 [2024-07-16 01:18:19.867882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.953 [2024-07-16 01:18:19.867917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.953 qpair failed and we were unable to recover it. 00:25:03.953 [2024-07-16 01:18:19.868059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.953 [2024-07-16 01:18:19.868095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.953 qpair failed and we were unable to recover it. 00:25:03.953 [2024-07-16 01:18:19.868259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.953 [2024-07-16 01:18:19.868303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.953 qpair failed and we were unable to recover it. 00:25:03.953 [2024-07-16 01:18:19.868525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.954 [2024-07-16 01:18:19.868596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.954 qpair failed and we were unable to recover it. 00:25:03.954 [2024-07-16 01:18:19.868850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.954 [2024-07-16 01:18:19.868904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.954 qpair failed and we were unable to recover it. 00:25:03.954 [2024-07-16 01:18:19.869096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.954 [2024-07-16 01:18:19.869132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.954 qpair failed and we were unable to recover it. 00:25:03.954 [2024-07-16 01:18:19.869314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.954 [2024-07-16 01:18:19.869350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.954 qpair failed and we were unable to recover it. 
00:25:03.954 [2024-07-16 01:18:19.869544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.954 [2024-07-16 01:18:19.869578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.954 qpair failed and we were unable to recover it. 00:25:03.954 [2024-07-16 01:18:19.869735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.954 [2024-07-16 01:18:19.869769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.954 qpair failed and we were unable to recover it. 00:25:03.954 [2024-07-16 01:18:19.869995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.954 [2024-07-16 01:18:19.870031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.954 qpair failed and we were unable to recover it. 00:25:03.954 [2024-07-16 01:18:19.870187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.954 [2024-07-16 01:18:19.870230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.954 qpair failed and we were unable to recover it. 00:25:03.954 [2024-07-16 01:18:19.870504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.954 [2024-07-16 01:18:19.870557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.954 qpair failed and we were unable to recover it. 00:25:03.954 [2024-07-16 01:18:19.870754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.954 [2024-07-16 01:18:19.870808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.954 qpair failed and we were unable to recover it. 00:25:03.954 [2024-07-16 01:18:19.871037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.954 [2024-07-16 01:18:19.871072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.954 qpair failed and we were unable to recover it. 00:25:03.954 [2024-07-16 01:18:19.871201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.954 [2024-07-16 01:18:19.871236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.954 qpair failed and we were unable to recover it. 00:25:03.954 [2024-07-16 01:18:19.871396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.954 [2024-07-16 01:18:19.871430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.954 qpair failed and we were unable to recover it. 00:25:03.954 [2024-07-16 01:18:19.871610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.954 [2024-07-16 01:18:19.871644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.954 qpair failed and we were unable to recover it. 
00:25:03.954 [2024-07-16 01:18:19.871803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.954 [2024-07-16 01:18:19.871838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.954 qpair failed and we were unable to recover it. 00:25:03.954 [2024-07-16 01:18:19.872069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.954 [2024-07-16 01:18:19.872103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.954 qpair failed and we were unable to recover it. 00:25:03.954 [2024-07-16 01:18:19.872228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.954 [2024-07-16 01:18:19.872291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.954 qpair failed and we were unable to recover it. 00:25:03.954 [2024-07-16 01:18:19.872529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.954 [2024-07-16 01:18:19.872564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.954 qpair failed and we were unable to recover it. 00:25:03.954 [2024-07-16 01:18:19.872744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.954 [2024-07-16 01:18:19.872777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.954 qpair failed and we were unable to recover it. 00:25:03.954 [2024-07-16 01:18:19.872988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.954 [2024-07-16 01:18:19.873023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.954 qpair failed and we were unable to recover it. 00:25:03.954 [2024-07-16 01:18:19.873182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.954 [2024-07-16 01:18:19.873216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.954 qpair failed and we were unable to recover it. 00:25:03.954 [2024-07-16 01:18:19.873398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.954 [2024-07-16 01:18:19.873432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.954 qpair failed and we were unable to recover it. 00:25:03.954 [2024-07-16 01:18:19.873562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.954 [2024-07-16 01:18:19.873596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.954 qpair failed and we were unable to recover it. 00:25:03.954 [2024-07-16 01:18:19.873777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.954 [2024-07-16 01:18:19.873811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.954 qpair failed and we were unable to recover it. 
00:25:03.954 [2024-07-16 01:18:19.874009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.954 [2024-07-16 01:18:19.874045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.954 qpair failed and we were unable to recover it. 00:25:03.954 [2024-07-16 01:18:19.874202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.954 [2024-07-16 01:18:19.874236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.954 qpair failed and we were unable to recover it. 00:25:03.954 [2024-07-16 01:18:19.874395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.954 [2024-07-16 01:18:19.874446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.954 qpair failed and we were unable to recover it. 00:25:03.954 [2024-07-16 01:18:19.874698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.954 [2024-07-16 01:18:19.874751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.954 qpair failed and we were unable to recover it. 00:25:03.954 [2024-07-16 01:18:19.874988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.954 [2024-07-16 01:18:19.875047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.954 qpair failed and we were unable to recover it. 00:25:03.954 [2024-07-16 01:18:19.875277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.954 [2024-07-16 01:18:19.875311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.954 qpair failed and we were unable to recover it. 00:25:03.954 [2024-07-16 01:18:19.875469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.954 [2024-07-16 01:18:19.875520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.954 qpair failed and we were unable to recover it. 00:25:03.954 [2024-07-16 01:18:19.875648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.954 [2024-07-16 01:18:19.875682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.954 qpair failed and we were unable to recover it. 00:25:03.954 [2024-07-16 01:18:19.875866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.954 [2024-07-16 01:18:19.875898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.954 qpair failed and we were unable to recover it. 00:25:03.954 [2024-07-16 01:18:19.876045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.954 [2024-07-16 01:18:19.876079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.954 qpair failed and we were unable to recover it. 
00:25:03.954 [2024-07-16 01:18:19.876235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.954 [2024-07-16 01:18:19.876292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.954 qpair failed and we were unable to recover it. 00:25:03.954 [2024-07-16 01:18:19.876579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.954 [2024-07-16 01:18:19.876611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.954 qpair failed and we were unable to recover it. 00:25:03.954 [2024-07-16 01:18:19.876756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.954 [2024-07-16 01:18:19.876789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.954 qpair failed and we were unable to recover it. 00:25:03.954 [2024-07-16 01:18:19.877008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.954 [2024-07-16 01:18:19.877040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.954 qpair failed and we were unable to recover it. 00:25:03.954 [2024-07-16 01:18:19.877195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.954 [2024-07-16 01:18:19.877242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.954 qpair failed and we were unable to recover it. 00:25:03.954 [2024-07-16 01:18:19.877419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.954 [2024-07-16 01:18:19.877451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.954 qpair failed and we were unable to recover it. 00:25:03.954 [2024-07-16 01:18:19.877632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.954 [2024-07-16 01:18:19.877684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.954 qpair failed and we were unable to recover it. 00:25:03.954 [2024-07-16 01:18:19.877896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.954 [2024-07-16 01:18:19.877929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.954 qpair failed and we were unable to recover it. 00:25:03.954 [2024-07-16 01:18:19.878064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.954 [2024-07-16 01:18:19.878097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.954 qpair failed and we were unable to recover it. 00:25:03.954 [2024-07-16 01:18:19.879570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.954 [2024-07-16 01:18:19.879613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.954 qpair failed and we were unable to recover it. 
00:25:03.954 [2024-07-16 01:18:19.879750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.954 [2024-07-16 01:18:19.879778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.954 qpair failed and we were unable to recover it. 00:25:03.954 [2024-07-16 01:18:19.879934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.954 [2024-07-16 01:18:19.879970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.954 qpair failed and we were unable to recover it. 00:25:03.954 [2024-07-16 01:18:19.880739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.954 [2024-07-16 01:18:19.880769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.954 qpair failed and we were unable to recover it. 00:25:03.954 [2024-07-16 01:18:19.880901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.954 [2024-07-16 01:18:19.880934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.954 qpair failed and we were unable to recover it. 00:25:03.954 [2024-07-16 01:18:19.881114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.954 [2024-07-16 01:18:19.881162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.954 qpair failed and we were unable to recover it. 00:25:03.954 [2024-07-16 01:18:19.881344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.954 [2024-07-16 01:18:19.881397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.954 qpair failed and we were unable to recover it. 00:25:03.954 [2024-07-16 01:18:19.881650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.954 [2024-07-16 01:18:19.881709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.954 qpair failed and we were unable to recover it. 00:25:03.954 [2024-07-16 01:18:19.881817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.954 [2024-07-16 01:18:19.881843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.955 qpair failed and we were unable to recover it. 00:25:03.955 [2024-07-16 01:18:19.881990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.955 [2024-07-16 01:18:19.882028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.955 qpair failed and we were unable to recover it. 00:25:03.955 [2024-07-16 01:18:19.882186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.955 [2024-07-16 01:18:19.882236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.955 qpair failed and we were unable to recover it. 
00:25:03.955 [2024-07-16 01:18:19.883033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.955 [2024-07-16 01:18:19.883063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.955 qpair failed and we were unable to recover it. 00:25:03.955 [2024-07-16 01:18:19.883323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.955 [2024-07-16 01:18:19.883378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.955 qpair failed and we were unable to recover it. 00:25:03.955 [2024-07-16 01:18:19.883566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.955 [2024-07-16 01:18:19.883615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.955 qpair failed and we were unable to recover it. 00:25:03.955 [2024-07-16 01:18:19.883723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.955 [2024-07-16 01:18:19.883749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.955 qpair failed and we were unable to recover it. 00:25:03.955 [2024-07-16 01:18:19.883901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.955 [2024-07-16 01:18:19.883927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.955 qpair failed and we were unable to recover it. 00:25:03.955 [2024-07-16 01:18:19.884059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.955 [2024-07-16 01:18:19.884104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.955 qpair failed and we were unable to recover it. 00:25:03.955 [2024-07-16 01:18:19.884287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.955 [2024-07-16 01:18:19.884334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.955 qpair failed and we were unable to recover it. 00:25:03.955 [2024-07-16 01:18:19.884451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.955 [2024-07-16 01:18:19.884478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.955 qpair failed and we were unable to recover it. 00:25:03.955 [2024-07-16 01:18:19.884577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.955 [2024-07-16 01:18:19.884604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.955 qpair failed and we were unable to recover it. 00:25:03.955 [2024-07-16 01:18:19.884745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.955 [2024-07-16 01:18:19.884772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.955 qpair failed and we were unable to recover it. 
00:25:03.955 [2024-07-16 01:18:19.884935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.955 [2024-07-16 01:18:19.884968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.955 qpair failed and we were unable to recover it. 00:25:03.955 [2024-07-16 01:18:19.885068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.955 [2024-07-16 01:18:19.885095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.955 qpair failed and we were unable to recover it. 00:25:03.955 [2024-07-16 01:18:19.885189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.955 [2024-07-16 01:18:19.885216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.955 qpair failed and we were unable to recover it. 00:25:03.955 [2024-07-16 01:18:19.885319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.955 [2024-07-16 01:18:19.885347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.955 qpair failed and we were unable to recover it. 00:25:03.955 [2024-07-16 01:18:19.885471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.955 [2024-07-16 01:18:19.885498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.955 qpair failed and we were unable to recover it. 00:25:03.955 [2024-07-16 01:18:19.885593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.955 [2024-07-16 01:18:19.885619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.955 qpair failed and we were unable to recover it. 00:25:03.955 [2024-07-16 01:18:19.885754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.955 [2024-07-16 01:18:19.885791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.955 qpair failed and we were unable to recover it. 00:25:03.955 [2024-07-16 01:18:19.885952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.955 [2024-07-16 01:18:19.885988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.955 qpair failed and we were unable to recover it. 00:25:03.955 [2024-07-16 01:18:19.886113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.955 [2024-07-16 01:18:19.886140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.955 qpair failed and we were unable to recover it. 00:25:03.955 [2024-07-16 01:18:19.886248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.955 [2024-07-16 01:18:19.886275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.955 qpair failed and we were unable to recover it. 
00:25:03.955 [2024-07-16 01:18:19.886423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.955 [2024-07-16 01:18:19.886450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.955 qpair failed and we were unable to recover it. 00:25:03.955 [2024-07-16 01:18:19.886550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.955 [2024-07-16 01:18:19.886577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.955 qpair failed and we were unable to recover it. 00:25:03.955 [2024-07-16 01:18:19.887317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.955 [2024-07-16 01:18:19.887346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.955 qpair failed and we were unable to recover it. 00:25:03.955 [2024-07-16 01:18:19.887513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.955 [2024-07-16 01:18:19.887540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.955 qpair failed and we were unable to recover it. 00:25:03.955 [2024-07-16 01:18:19.887635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.955 [2024-07-16 01:18:19.887663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.955 qpair failed and we were unable to recover it. 00:25:03.955 [2024-07-16 01:18:19.888550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.955 [2024-07-16 01:18:19.888591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.955 qpair failed and we were unable to recover it. 00:25:03.955 [2024-07-16 01:18:19.888752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.955 [2024-07-16 01:18:19.888780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.955 qpair failed and we were unable to recover it. 00:25:03.955 [2024-07-16 01:18:19.888931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.955 [2024-07-16 01:18:19.888966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.955 qpair failed and we were unable to recover it. 00:25:03.955 [2024-07-16 01:18:19.889083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.955 [2024-07-16 01:18:19.889110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.955 qpair failed and we were unable to recover it. 00:25:03.955 [2024-07-16 01:18:19.889204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.955 [2024-07-16 01:18:19.889234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:03.955 qpair failed and we were unable to recover it. 
00:25:03.955 [2024-07-16 01:18:19.889352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.955 [2024-07-16 01:18:19.889379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:03.955 qpair failed and we were unable to recover it.
[... the same three-line failure pattern (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error / qpair failed and we were unable to recover it.) repeats continuously from 01:18:19.889 to 01:18:19.926, alternating between tqpair=0x7f7a68000b90 and tqpair=0x7f7a60000b90, always with addr=10.0.0.2, port=4420 — duplicate log entries elided ...]
00:25:04.237 [2024-07-16 01:18:19.926059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.237 [2024-07-16 01:18:19.926086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:04.237 qpair failed and we were unable to recover it.
00:25:04.237 [2024-07-16 01:18:19.926185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.237 [2024-07-16 01:18:19.926223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.237 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-16 01:18:19.926378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-16 01:18:19.926413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-16 01:18:19.926573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-16 01:18:19.926609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-16 01:18:19.926762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-16 01:18:19.926797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-16 01:18:19.926932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-16 01:18:19.926967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-16 01:18:19.927072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-16 01:18:19.927100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-16 01:18:19.927207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-16 01:18:19.927267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-16 01:18:19.927386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-16 01:18:19.927421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-16 01:18:19.927572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-16 01:18:19.927607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-16 01:18:19.927763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-16 01:18:19.927798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 
00:25:04.238 [2024-07-16 01:18:19.927927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-16 01:18:19.927972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-16 01:18:19.928144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-16 01:18:19.928172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-16 01:18:19.928281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-16 01:18:19.928309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-16 01:18:19.928464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-16 01:18:19.928515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-16 01:18:19.928647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-16 01:18:19.928682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-16 01:18:19.928865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-16 01:18:19.928901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-16 01:18:19.929057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-16 01:18:19.929086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-16 01:18:19.929192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-16 01:18:19.929248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-16 01:18:19.929424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-16 01:18:19.929460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-16 01:18:19.929618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-16 01:18:19.929654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 
00:25:04.238 [2024-07-16 01:18:19.929807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-16 01:18:19.929843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-16 01:18:19.929987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-16 01:18:19.930023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-16 01:18:19.930134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-16 01:18:19.930161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-16 01:18:19.930288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-16 01:18:19.930330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-16 01:18:19.930504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-16 01:18:19.930552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-16 01:18:19.930698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-16 01:18:19.930747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-16 01:18:19.930875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-16 01:18:19.930903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-16 01:18:19.931010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-16 01:18:19.931036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-16 01:18:19.931192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-16 01:18:19.931244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-16 01:18:19.931419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-16 01:18:19.931467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 
00:25:04.238 [2024-07-16 01:18:19.931645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-16 01:18:19.931693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-16 01:18:19.931814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-16 01:18:19.931842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-16 01:18:19.931967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-16 01:18:19.931995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-16 01:18:19.932096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-16 01:18:19.932122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-16 01:18:19.932254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-16 01:18:19.932281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-16 01:18:19.932405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-16 01:18:19.932431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-16 01:18:19.932530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-16 01:18:19.932562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-16 01:18:19.932660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-16 01:18:19.932687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-16 01:18:19.932779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-16 01:18:19.932810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-16 01:18:19.932912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-16 01:18:19.932939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 
00:25:04.238 [2024-07-16 01:18:19.933086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-16 01:18:19.933113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-16 01:18:19.933216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-16 01:18:19.933246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-16 01:18:19.933371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-16 01:18:19.933398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-16 01:18:19.933498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-16 01:18:19.933525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-16 01:18:19.933646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-16 01:18:19.933673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-16 01:18:19.933770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-16 01:18:19.933798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-16 01:18:19.933903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-16 01:18:19.933929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-16 01:18:19.934048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-16 01:18:19.934074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-16 01:18:19.934193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-16 01:18:19.934224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-16 01:18:19.934320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-16 01:18:19.934346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 
00:25:04.239 [2024-07-16 01:18:19.934509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-16 01:18:19.934537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-16 01:18:19.934662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-16 01:18:19.934689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-16 01:18:19.934811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-16 01:18:19.934838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-16 01:18:19.934940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-16 01:18:19.934976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-16 01:18:19.935110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-16 01:18:19.935163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-16 01:18:19.935330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-16 01:18:19.935383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-16 01:18:19.935518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-16 01:18:19.935546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-16 01:18:19.935646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-16 01:18:19.935674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-16 01:18:19.935796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-16 01:18:19.935835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-16 01:18:19.935942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-16 01:18:19.935978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 
00:25:04.239 [2024-07-16 01:18:19.936092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-16 01:18:19.936118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-16 01:18:19.936215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-16 01:18:19.936242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-16 01:18:19.936343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-16 01:18:19.936370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-16 01:18:19.936470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-16 01:18:19.936497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-16 01:18:19.936596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-16 01:18:19.936623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-16 01:18:19.936739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-16 01:18:19.936766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-16 01:18:19.936871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-16 01:18:19.936898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-16 01:18:19.936997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-16 01:18:19.937023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-16 01:18:19.937128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-16 01:18:19.937154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-16 01:18:19.937289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-16 01:18:19.937316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 
00:25:04.239 [2024-07-16 01:18:19.937437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-16 01:18:19.937464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-16 01:18:19.937571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-16 01:18:19.937597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-16 01:18:19.937716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-16 01:18:19.937742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-16 01:18:19.937871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-16 01:18:19.937899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-16 01:18:19.938025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-16 01:18:19.938073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-16 01:18:19.938181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-16 01:18:19.938207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-16 01:18:19.938344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-16 01:18:19.938375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-16 01:18:19.938496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-16 01:18:19.938523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-16 01:18:19.938629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-16 01:18:19.938656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-16 01:18:19.938802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-16 01:18:19.938829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 
00:25:04.239 [2024-07-16 01:18:19.938930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-16 01:18:19.938977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-16 01:18:19.939098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-16 01:18:19.939140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-16 01:18:19.939252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-16 01:18:19.939281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 00:25:04.240 [2024-07-16 01:18:19.939384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-16 01:18:19.939411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 00:25:04.240 [2024-07-16 01:18:19.939536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-16 01:18:19.939563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 00:25:04.240 [2024-07-16 01:18:19.939686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-16 01:18:19.939713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 00:25:04.240 [2024-07-16 01:18:19.939866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-16 01:18:19.939893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 00:25:04.240 [2024-07-16 01:18:19.940020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-16 01:18:19.940054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 00:25:04.240 [2024-07-16 01:18:19.940168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-16 01:18:19.940201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 00:25:04.240 [2024-07-16 01:18:19.940397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-16 01:18:19.940432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 
00:25:04.240 [2024-07-16 01:18:19.940576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-16 01:18:19.940612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 00:25:04.240 [2024-07-16 01:18:19.940774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-16 01:18:19.940810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 00:25:04.240 [2024-07-16 01:18:19.940967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-16 01:18:19.941024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 00:25:04.240 [2024-07-16 01:18:19.941132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-16 01:18:19.941159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 00:25:04.240 [2024-07-16 01:18:19.941294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-16 01:18:19.941329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 00:25:04.240 [2024-07-16 01:18:19.941533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-16 01:18:19.941568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 00:25:04.240 [2024-07-16 01:18:19.941704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-16 01:18:19.941731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 00:25:04.240 [2024-07-16 01:18:19.941920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-16 01:18:19.941947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 00:25:04.240 [2024-07-16 01:18:19.942087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-16 01:18:19.942114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 00:25:04.240 [2024-07-16 01:18:19.942217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-16 01:18:19.942263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 
00:25:04.240 [2024-07-16 01:18:19.942392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-16 01:18:19.942425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 00:25:04.240 [2024-07-16 01:18:19.942620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-16 01:18:19.942655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 00:25:04.240 [2024-07-16 01:18:19.942837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-16 01:18:19.942872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 00:25:04.240 [2024-07-16 01:18:19.943055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-16 01:18:19.943083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 00:25:04.240 [2024-07-16 01:18:19.943194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-16 01:18:19.943221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 00:25:04.240 [2024-07-16 01:18:19.943378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-16 01:18:19.943415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 00:25:04.240 [2024-07-16 01:18:19.943634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-16 01:18:19.943669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 00:25:04.240 [2024-07-16 01:18:19.943795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-16 01:18:19.943830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 00:25:04.240 [2024-07-16 01:18:19.943952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-16 01:18:19.944018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 00:25:04.240 [2024-07-16 01:18:19.944117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-16 01:18:19.944144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 
00:25:04.240 [2024-07-16 01:18:19.944254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-16 01:18:19.944297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 00:25:04.240 [2024-07-16 01:18:19.944433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-16 01:18:19.944470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 00:25:04.240 [2024-07-16 01:18:19.944605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-16 01:18:19.944649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 00:25:04.240 [2024-07-16 01:18:19.944827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-16 01:18:19.944861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 00:25:04.240 [2024-07-16 01:18:19.945018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-16 01:18:19.945046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 00:25:04.240 [2024-07-16 01:18:19.945139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-16 01:18:19.945166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 00:25:04.241 [2024-07-16 01:18:19.945261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.241 [2024-07-16 01:18:19.945314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.241 qpair failed and we were unable to recover it. 00:25:04.241 [2024-07-16 01:18:19.945453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.241 [2024-07-16 01:18:19.945488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.241 qpair failed and we were unable to recover it. 00:25:04.241 [2024-07-16 01:18:19.945667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.241 [2024-07-16 01:18:19.945701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.241 qpair failed and we were unable to recover it. 00:25:04.241 [2024-07-16 01:18:19.945866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.241 [2024-07-16 01:18:19.945897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.241 qpair failed and we were unable to recover it. 
00:25:04.241 [2024-07-16 01:18:19.946041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.241 [2024-07-16 01:18:19.946089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.241 qpair failed and we were unable to recover it. 00:25:04.241 [2024-07-16 01:18:19.946213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.241 [2024-07-16 01:18:19.946246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.241 qpair failed and we were unable to recover it. 00:25:04.241 [2024-07-16 01:18:19.946408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.241 [2024-07-16 01:18:19.946444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.241 qpair failed and we were unable to recover it. 00:25:04.241 [2024-07-16 01:18:19.946609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.241 [2024-07-16 01:18:19.946644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.241 qpair failed and we were unable to recover it. 00:25:04.241 [2024-07-16 01:18:19.946754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.241 [2024-07-16 01:18:19.946789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.241 qpair failed and we were unable to recover it. 00:25:04.241 [2024-07-16 01:18:19.946913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.241 [2024-07-16 01:18:19.946940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.241 qpair failed and we were unable to recover it. 00:25:04.241 [2024-07-16 01:18:19.947079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.241 [2024-07-16 01:18:19.947106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.241 qpair failed and we were unable to recover it. 00:25:04.241 [2024-07-16 01:18:19.947206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.241 [2024-07-16 01:18:19.947235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.241 qpair failed and we were unable to recover it. 00:25:04.241 [2024-07-16 01:18:19.947391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.241 [2024-07-16 01:18:19.947426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.241 qpair failed and we were unable to recover it. 00:25:04.241 [2024-07-16 01:18:19.947581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.241 [2024-07-16 01:18:19.947616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.241 qpair failed and we were unable to recover it. 
00:25:04.241 [2024-07-16 01:18:19.947745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.241 [2024-07-16 01:18:19.947795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.241 qpair failed and we were unable to recover it.
00:25:04.241 [the connect()/qpair-failure triplet above repeated 52 more times for tqpair=0x7f7a60000b90, 2024-07-16 01:18:19.947971 through 01:18:19.957014]
00:25:04.242 [2024-07-16 01:18:19.957159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.242 [2024-07-16 01:18:19.957209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.242 qpair failed and we were unable to recover it.
00:25:04.242 [the connect()/qpair-failure triplet above repeated 44 more times for tqpair=0x14b73f0, 2024-07-16 01:18:19.957415 through 01:18:19.965932]
00:25:04.243 [2024-07-16 01:18:19.966083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.243 [2024-07-16 01:18:19.966127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.243 qpair failed and we were unable to recover it.
00:25:04.246 [the connect()/qpair-failure triplet above repeated 111 more times for tqpair=0x7f7a58000b90, 2024-07-16 01:18:19.966289 through 01:18:19.986091]
00:25:04.246 [2024-07-16 01:18:19.986211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.246 [2024-07-16 01:18:19.986252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.246 qpair failed and we were unable to recover it. 00:25:04.246 [2024-07-16 01:18:19.986403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.246 [2024-07-16 01:18:19.986435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.246 qpair failed and we were unable to recover it. 00:25:04.246 [2024-07-16 01:18:19.986581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.246 [2024-07-16 01:18:19.986613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.246 qpair failed and we were unable to recover it. 00:25:04.246 [2024-07-16 01:18:19.986786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.246 [2024-07-16 01:18:19.986819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.246 qpair failed and we were unable to recover it. 00:25:04.246 [2024-07-16 01:18:19.986942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.246 [2024-07-16 01:18:19.986981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.246 qpair failed and we were unable to recover it. 00:25:04.246 [2024-07-16 01:18:19.987130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.246 [2024-07-16 01:18:19.987161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.246 qpair failed and we were unable to recover it. 00:25:04.246 [2024-07-16 01:18:19.987317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.246 [2024-07-16 01:18:19.987348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.246 qpair failed and we were unable to recover it. 00:25:04.246 [2024-07-16 01:18:19.987472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.246 [2024-07-16 01:18:19.987506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.246 qpair failed and we were unable to recover it. 00:25:04.246 [2024-07-16 01:18:19.987630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.246 [2024-07-16 01:18:19.987662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.246 qpair failed and we were unable to recover it. 00:25:04.246 [2024-07-16 01:18:19.987781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.246 [2024-07-16 01:18:19.987813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.246 qpair failed and we were unable to recover it. 
00:25:04.246 [2024-07-16 01:18:19.987971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.246 [2024-07-16 01:18:19.988017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.246 qpair failed and we were unable to recover it. 00:25:04.246 [2024-07-16 01:18:19.988142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.246 [2024-07-16 01:18:19.988173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.246 qpair failed and we were unable to recover it. 00:25:04.246 [2024-07-16 01:18:19.988329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.246 [2024-07-16 01:18:19.988361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.246 qpair failed and we were unable to recover it. 00:25:04.246 [2024-07-16 01:18:19.988510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.246 [2024-07-16 01:18:19.988543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.246 qpair failed and we were unable to recover it. 00:25:04.246 [2024-07-16 01:18:19.988690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.246 [2024-07-16 01:18:19.988722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.246 qpair failed and we were unable to recover it. 00:25:04.246 [2024-07-16 01:18:19.988896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.246 [2024-07-16 01:18:19.988927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-16 01:18:19.989059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-16 01:18:19.989103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-16 01:18:19.989207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-16 01:18:19.989239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-16 01:18:19.989348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-16 01:18:19.989376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-16 01:18:19.989491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-16 01:18:19.989519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 
00:25:04.247 [2024-07-16 01:18:19.989671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-16 01:18:19.989697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-16 01:18:19.989808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-16 01:18:19.989834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-16 01:18:19.989941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-16 01:18:19.989975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-16 01:18:19.990138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-16 01:18:19.990164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-16 01:18:19.990298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-16 01:18:19.990326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-16 01:18:19.990485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-16 01:18:19.990512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-16 01:18:19.990638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-16 01:18:19.990665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-16 01:18:19.990797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-16 01:18:19.990825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-16 01:18:19.990988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-16 01:18:19.991027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-16 01:18:19.991159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-16 01:18:19.991185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 
00:25:04.247 [2024-07-16 01:18:19.991338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-16 01:18:19.991366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-16 01:18:19.991473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-16 01:18:19.991499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-16 01:18:19.991648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-16 01:18:19.991679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-16 01:18:19.991826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-16 01:18:19.991859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-16 01:18:19.992027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-16 01:18:19.992054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-16 01:18:19.992158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-16 01:18:19.992185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-16 01:18:19.992347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-16 01:18:19.992378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-16 01:18:19.992512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-16 01:18:19.992543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-16 01:18:19.992764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-16 01:18:19.992792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-16 01:18:19.992951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-16 01:18:19.992990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 
00:25:04.247 [2024-07-16 01:18:19.993134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-16 01:18:19.993160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-16 01:18:19.993325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-16 01:18:19.993357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-16 01:18:19.993504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-16 01:18:19.993537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-16 01:18:19.993692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-16 01:18:19.993725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-16 01:18:19.993881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-16 01:18:19.993912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-16 01:18:19.994063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-16 01:18:19.994089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-16 01:18:19.994225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-16 01:18:19.994270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-16 01:18:19.994453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-16 01:18:19.994483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-16 01:18:19.994591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-16 01:18:19.994622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-16 01:18:19.994768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-16 01:18:19.994802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 
00:25:04.247 [2024-07-16 01:18:19.994960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-16 01:18:19.994992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-16 01:18:19.995138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-16 01:18:19.995164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-16 01:18:19.995328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-16 01:18:19.995360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-16 01:18:19.995484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-16 01:18:19.995517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-16 01:18:19.995673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-16 01:18:19.995707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-16 01:18:19.995859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-16 01:18:19.995892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-16 01:18:19.996035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-16 01:18:19.996062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 [2024-07-16 01:18:19.996168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-16 01:18:19.996199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 [2024-07-16 01:18:19.996358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-16 01:18:19.996389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 [2024-07-16 01:18:19.996549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-16 01:18:19.996581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 
00:25:04.248 [2024-07-16 01:18:19.996740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-16 01:18:19.996768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 [2024-07-16 01:18:19.996930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-16 01:18:19.996968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 [2024-07-16 01:18:19.997120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-16 01:18:19.997147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 [2024-07-16 01:18:19.997348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-16 01:18:19.997402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 [2024-07-16 01:18:19.997584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-16 01:18:19.997615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 [2024-07-16 01:18:19.997779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-16 01:18:19.997810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 [2024-07-16 01:18:19.998018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-16 01:18:19.998045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 [2024-07-16 01:18:19.998177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-16 01:18:19.998204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 [2024-07-16 01:18:19.998393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-16 01:18:19.998426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 [2024-07-16 01:18:19.998590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-16 01:18:19.998639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 
00:25:04.248 [2024-07-16 01:18:19.998815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-16 01:18:19.998847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 [2024-07-16 01:18:19.999016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-16 01:18:19.999044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 [2024-07-16 01:18:19.999137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-16 01:18:19.999164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 [2024-07-16 01:18:19.999315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-16 01:18:19.999342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 [2024-07-16 01:18:19.999475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-16 01:18:19.999520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 [2024-07-16 01:18:19.999686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-16 01:18:19.999719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 [2024-07-16 01:18:19.999870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-16 01:18:19.999903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 [2024-07-16 01:18:20.000056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-16 01:18:20.000084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 [2024-07-16 01:18:20.000178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-16 01:18:20.000206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 [2024-07-16 01:18:20.000337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-16 01:18:20.000364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 
00:25:04.248 [2024-07-16 01:18:20.000509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-16 01:18:20.000540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 [2024-07-16 01:18:20.000706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-16 01:18:20.000747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 [2024-07-16 01:18:20.000938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-16 01:18:20.001046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 [2024-07-16 01:18:20.001202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-16 01:18:20.001254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 [2024-07-16 01:18:20.001464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-16 01:18:20.001505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 [2024-07-16 01:18:20.001639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-16 01:18:20.001671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 [2024-07-16 01:18:20.001825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-16 01:18:20.001857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 [2024-07-16 01:18:20.002026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-16 01:18:20.002054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 [2024-07-16 01:18:20.002151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-16 01:18:20.002177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 [2024-07-16 01:18:20.002342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-16 01:18:20.002373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 
00:25:04.248 [2024-07-16 01:18:20.002499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-16 01:18:20.002531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 [2024-07-16 01:18:20.002654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-16 01:18:20.002685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 [2024-07-16 01:18:20.002838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-16 01:18:20.002865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 [2024-07-16 01:18:20.003021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-16 01:18:20.003049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 [2024-07-16 01:18:20.003200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-16 01:18:20.003231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 [2024-07-16 01:18:20.003409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-16 01:18:20.003439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 [2024-07-16 01:18:20.003586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-16 01:18:20.003636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 [2024-07-16 01:18:20.003827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-16 01:18:20.003853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 00:25:04.249 [2024-07-16 01:18:20.004012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-16 01:18:20.004039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 00:25:04.249 [2024-07-16 01:18:20.004167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-16 01:18:20.004192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 
00:25:04.249 [2024-07-16 01:18:20.004320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-16 01:18:20.004363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 00:25:04.249 [2024-07-16 01:18:20.004521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-16 01:18:20.004556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 00:25:04.249 [2024-07-16 01:18:20.004770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-16 01:18:20.004797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 00:25:04.249 [2024-07-16 01:18:20.004944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-16 01:18:20.004977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 00:25:04.249 [2024-07-16 01:18:20.005088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-16 01:18:20.005114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 00:25:04.249 [2024-07-16 01:18:20.005238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-16 01:18:20.005286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 00:25:04.249 [2024-07-16 01:18:20.005404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-16 01:18:20.005451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 00:25:04.249 [2024-07-16 01:18:20.005654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-16 01:18:20.005701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 00:25:04.249 [2024-07-16 01:18:20.005879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-16 01:18:20.005906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 00:25:04.249 [2024-07-16 01:18:20.006003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-16 01:18:20.006029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 
00:25:04.249 [2024-07-16 01:18:20.006181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-16 01:18:20.006212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 00:25:04.249 [2024-07-16 01:18:20.006384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-16 01:18:20.006432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 00:25:04.249 [2024-07-16 01:18:20.006595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-16 01:18:20.006624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 00:25:04.249 [2024-07-16 01:18:20.006760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-16 01:18:20.006789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 00:25:04.249 [2024-07-16 01:18:20.006921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-16 01:18:20.006948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 00:25:04.249 [2024-07-16 01:18:20.007089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-16 01:18:20.007115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 00:25:04.249 [2024-07-16 01:18:20.007261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-16 01:18:20.007287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 00:25:04.249 [2024-07-16 01:18:20.007392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-16 01:18:20.007428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 00:25:04.249 [2024-07-16 01:18:20.007577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-16 01:18:20.007609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 00:25:04.249 [2024-07-16 01:18:20.007732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-16 01:18:20.007763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 
00:25:04.249 [2024-07-16 01:18:20.007922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-16 01:18:20.007974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 00:25:04.249 [2024-07-16 01:18:20.008134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-16 01:18:20.008169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 00:25:04.249 [2024-07-16 01:18:20.009242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-16 01:18:20.009283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 00:25:04.249 [2024-07-16 01:18:20.009421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-16 01:18:20.009459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 00:25:04.249 [2024-07-16 01:18:20.009591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-16 01:18:20.009648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 00:25:04.249 [2024-07-16 01:18:20.009812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-16 01:18:20.009849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 00:25:04.249 [2024-07-16 01:18:20.009993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-16 01:18:20.010031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 00:25:04.249 [2024-07-16 01:18:20.010180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-16 01:18:20.010232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 00:25:04.249 [2024-07-16 01:18:20.010380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-16 01:18:20.010423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 00:25:04.249 [2024-07-16 01:18:20.010564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-16 01:18:20.010590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 
00:25:04.249 [2024-07-16 01:18:20.010692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-16 01:18:20.010717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 00:25:04.249 [2024-07-16 01:18:20.010819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-16 01:18:20.010845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 00:25:04.249 [2024-07-16 01:18:20.010974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-16 01:18:20.011000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 00:25:04.249 [2024-07-16 01:18:20.011127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.250 [2024-07-16 01:18:20.011153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.250 qpair failed and we were unable to recover it. 00:25:04.250 [2024-07-16 01:18:20.011255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.250 [2024-07-16 01:18:20.011282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.250 qpair failed and we were unable to recover it. 00:25:04.250 [2024-07-16 01:18:20.011432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.250 [2024-07-16 01:18:20.011459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.250 qpair failed and we were unable to recover it. 00:25:04.250 [2024-07-16 01:18:20.011604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.250 [2024-07-16 01:18:20.011645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.250 qpair failed and we were unable to recover it. 00:25:04.250 [2024-07-16 01:18:20.011783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.250 [2024-07-16 01:18:20.011812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.250 qpair failed and we were unable to recover it. 00:25:04.250 [2024-07-16 01:18:20.011969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.250 [2024-07-16 01:18:20.012004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.250 qpair failed and we were unable to recover it. 00:25:04.250 [2024-07-16 01:18:20.012104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.250 [2024-07-16 01:18:20.012132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.250 qpair failed and we were unable to recover it. 
00:25:04.250 [2024-07-16 01:18:20.012255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.250 [2024-07-16 01:18:20.012282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.250 qpair failed and we were unable to recover it. 00:25:04.250 [2024-07-16 01:18:20.012410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.250 [2024-07-16 01:18:20.012437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.250 qpair failed and we were unable to recover it. 00:25:04.250 [2024-07-16 01:18:20.012588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.250 [2024-07-16 01:18:20.012615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.250 qpair failed and we were unable to recover it. 00:25:04.250 [2024-07-16 01:18:20.012717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.250 [2024-07-16 01:18:20.012744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.250 qpair failed and we were unable to recover it. 00:25:04.250 [2024-07-16 01:18:20.012898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.250 [2024-07-16 01:18:20.012926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.250 qpair failed and we were unable to recover it. 00:25:04.250 [2024-07-16 01:18:20.013080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.250 [2024-07-16 01:18:20.013106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.250 qpair failed and we were unable to recover it. 00:25:04.250 [2024-07-16 01:18:20.013254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.250 [2024-07-16 01:18:20.013287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.250 qpair failed and we were unable to recover it. 00:25:04.250 [2024-07-16 01:18:20.013430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.250 [2024-07-16 01:18:20.013461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.250 qpair failed and we were unable to recover it. 00:25:04.250 [2024-07-16 01:18:20.013654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.250 [2024-07-16 01:18:20.013695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.250 qpair failed and we were unable to recover it. 00:25:04.250 [2024-07-16 01:18:20.013878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.250 [2024-07-16 01:18:20.013904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.250 qpair failed and we were unable to recover it. 
00:25:04.250 [2024-07-16 01:18:20.014011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.250 [2024-07-16 01:18:20.014037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.250 qpair failed and we were unable to recover it.
00:25:04.250 [2024-07-16 01:18:20.014170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.250 [2024-07-16 01:18:20.014200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.250 qpair failed and we were unable to recover it.
00:25:04.250 [2024-07-16 01:18:20.014377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.250 [2024-07-16 01:18:20.014409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.250 qpair failed and we were unable to recover it.
00:25:04.250 [2024-07-16 01:18:20.014554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.250 [2024-07-16 01:18:20.014608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.250 qpair failed and we were unable to recover it.
00:25:04.250 [2024-07-16 01:18:20.014864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.250 [2024-07-16 01:18:20.014921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.250 qpair failed and we were unable to recover it.
00:25:04.250 [2024-07-16 01:18:20.015052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.250 [2024-07-16 01:18:20.015078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.250 qpair failed and we were unable to recover it.
00:25:04.250 [2024-07-16 01:18:20.015183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.250 [2024-07-16 01:18:20.015209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.250 qpair failed and we were unable to recover it.
00:25:04.250 [2024-07-16 01:18:20.015365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.250 [2024-07-16 01:18:20.015436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.250 qpair failed and we were unable to recover it.
00:25:04.250 [2024-07-16 01:18:20.015703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.250 [2024-07-16 01:18:20.015770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.250 qpair failed and we were unable to recover it.
00:25:04.250 [2024-07-16 01:18:20.016025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.250 [2024-07-16 01:18:20.016051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.250 qpair failed and we were unable to recover it.
00:25:04.250 [2024-07-16 01:18:20.016177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.250 [2024-07-16 01:18:20.016203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.250 qpair failed and we were unable to recover it.
00:25:04.250 [2024-07-16 01:18:20.016328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.250 [2024-07-16 01:18:20.016354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.250 qpair failed and we were unable to recover it.
00:25:04.250 [2024-07-16 01:18:20.016529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.250 [2024-07-16 01:18:20.016584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.250 qpair failed and we were unable to recover it.
00:25:04.250 [2024-07-16 01:18:20.016858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.250 [2024-07-16 01:18:20.016884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.250 qpair failed and we were unable to recover it.
00:25:04.250 [2024-07-16 01:18:20.016985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.250 [2024-07-16 01:18:20.017010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.250 qpair failed and we were unable to recover it.
00:25:04.250 [2024-07-16 01:18:20.017139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.250 [2024-07-16 01:18:20.017168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.250 qpair failed and we were unable to recover it.
00:25:04.250 [2024-07-16 01:18:20.017271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.250 [2024-07-16 01:18:20.017297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.250 qpair failed and we were unable to recover it.
00:25:04.250 [2024-07-16 01:18:20.017454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.250 [2024-07-16 01:18:20.017505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.250 qpair failed and we were unable to recover it.
00:25:04.250 [2024-07-16 01:18:20.017651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.250 [2024-07-16 01:18:20.017682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.250 qpair failed and we were unable to recover it.
00:25:04.250 [2024-07-16 01:18:20.017830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.250 [2024-07-16 01:18:20.017878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.250 qpair failed and we were unable to recover it.
00:25:04.250 [2024-07-16 01:18:20.017982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.250 [2024-07-16 01:18:20.018009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.250 qpair failed and we were unable to recover it.
00:25:04.250 [2024-07-16 01:18:20.018137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.250 [2024-07-16 01:18:20.018165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.250 qpair failed and we were unable to recover it.
00:25:04.250 [2024-07-16 01:18:20.018297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.250 [2024-07-16 01:18:20.018322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.250 qpair failed and we were unable to recover it.
00:25:04.250 [2024-07-16 01:18:20.018517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.250 [2024-07-16 01:18:20.018580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.251 qpair failed and we were unable to recover it.
00:25:04.251 [2024-07-16 01:18:20.018853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.251 [2024-07-16 01:18:20.018920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.251 qpair failed and we were unable to recover it.
00:25:04.251 [2024-07-16 01:18:20.019149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.251 [2024-07-16 01:18:20.019175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.251 qpair failed and we were unable to recover it.
00:25:04.251 [2024-07-16 01:18:20.019274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.251 [2024-07-16 01:18:20.019300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.251 qpair failed and we were unable to recover it.
00:25:04.251 [2024-07-16 01:18:20.019420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.251 [2024-07-16 01:18:20.019450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.251 qpair failed and we were unable to recover it.
00:25:04.251 [2024-07-16 01:18:20.019589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.251 [2024-07-16 01:18:20.019618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.251 qpair failed and we were unable to recover it.
00:25:04.251 [2024-07-16 01:18:20.019801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.251 [2024-07-16 01:18:20.019853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.251 qpair failed and we were unable to recover it.
00:25:04.251 [2024-07-16 01:18:20.020039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.251 [2024-07-16 01:18:20.020065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.251 qpair failed and we were unable to recover it.
00:25:04.251 [2024-07-16 01:18:20.020197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.251 [2024-07-16 01:18:20.020224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.251 qpair failed and we were unable to recover it.
00:25:04.251 [2024-07-16 01:18:20.020353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.251 [2024-07-16 01:18:20.020379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.251 qpair failed and we were unable to recover it.
00:25:04.251 [2024-07-16 01:18:20.020597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.251 [2024-07-16 01:18:20.020659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.251 qpair failed and we were unable to recover it.
00:25:04.251 [2024-07-16 01:18:20.020883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.251 [2024-07-16 01:18:20.020913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.251 qpair failed and we were unable to recover it.
00:25:04.251 [2024-07-16 01:18:20.021042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.251 [2024-07-16 01:18:20.021068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.251 qpair failed and we were unable to recover it.
00:25:04.251 [2024-07-16 01:18:20.021216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.251 [2024-07-16 01:18:20.021248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.251 qpair failed and we were unable to recover it.
00:25:04.251 [2024-07-16 01:18:20.021434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.251 [2024-07-16 01:18:20.021468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.251 qpair failed and we were unable to recover it.
00:25:04.251 [2024-07-16 01:18:20.021674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.251 [2024-07-16 01:18:20.021721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.251 qpair failed and we were unable to recover it.
00:25:04.251 [2024-07-16 01:18:20.021846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.251 [2024-07-16 01:18:20.021878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.251 qpair failed and we were unable to recover it.
00:25:04.251 [2024-07-16 01:18:20.022032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.251 [2024-07-16 01:18:20.022060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.251 qpair failed and we were unable to recover it.
00:25:04.251 [2024-07-16 01:18:20.022168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.251 [2024-07-16 01:18:20.022194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.251 qpair failed and we were unable to recover it.
00:25:04.251 [2024-07-16 01:18:20.022322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.251 [2024-07-16 01:18:20.022352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.251 qpair failed and we were unable to recover it.
00:25:04.251 [2024-07-16 01:18:20.022479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.251 [2024-07-16 01:18:20.022505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.251 qpair failed and we were unable to recover it.
00:25:04.251 [2024-07-16 01:18:20.022605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.251 [2024-07-16 01:18:20.022632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.251 qpair failed and we were unable to recover it.
00:25:04.251 [2024-07-16 01:18:20.022781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.251 [2024-07-16 01:18:20.022837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.251 qpair failed and we were unable to recover it.
00:25:04.251 [2024-07-16 01:18:20.022976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.251 [2024-07-16 01:18:20.023019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.251 qpair failed and we were unable to recover it.
00:25:04.251 [2024-07-16 01:18:20.023149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.251 [2024-07-16 01:18:20.023175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.251 qpair failed and we were unable to recover it.
00:25:04.251 [2024-07-16 01:18:20.023328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.251 [2024-07-16 01:18:20.023362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.251 qpair failed and we were unable to recover it.
00:25:04.251 [2024-07-16 01:18:20.023519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.251 [2024-07-16 01:18:20.023554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.251 qpair failed and we were unable to recover it.
00:25:04.251 [2024-07-16 01:18:20.023865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.251 [2024-07-16 01:18:20.023920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.251 qpair failed and we were unable to recover it.
00:25:04.251 [2024-07-16 01:18:20.024126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.251 [2024-07-16 01:18:20.024158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.251 qpair failed and we were unable to recover it.
00:25:04.251 [2024-07-16 01:18:20.024342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.251 [2024-07-16 01:18:20.024377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.251 qpair failed and we were unable to recover it.
00:25:04.251 [2024-07-16 01:18:20.024558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.251 [2024-07-16 01:18:20.024635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.251 qpair failed and we were unable to recover it.
00:25:04.251 [2024-07-16 01:18:20.024868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.251 [2024-07-16 01:18:20.024933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.251 qpair failed and we were unable to recover it.
00:25:04.251 [2024-07-16 01:18:20.025129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.251 [2024-07-16 01:18:20.025161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.251 qpair failed and we were unable to recover it.
00:25:04.251 [2024-07-16 01:18:20.025390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.251 [2024-07-16 01:18:20.025422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.251 qpair failed and we were unable to recover it.
00:25:04.251 [2024-07-16 01:18:20.025562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.251 [2024-07-16 01:18:20.025593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.251 qpair failed and we were unable to recover it.
00:25:04.251 [2024-07-16 01:18:20.025851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.251 [2024-07-16 01:18:20.025916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.251 qpair failed and we were unable to recover it.
00:25:04.251 [2024-07-16 01:18:20.026129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.251 [2024-07-16 01:18:20.026160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.251 qpair failed and we were unable to recover it.
00:25:04.251 [2024-07-16 01:18:20.026321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.251 [2024-07-16 01:18:20.026351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.251 qpair failed and we were unable to recover it.
00:25:04.251 [2024-07-16 01:18:20.026466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.251 [2024-07-16 01:18:20.026495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.251 qpair failed and we were unable to recover it.
00:25:04.251 [2024-07-16 01:18:20.026652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.251 [2024-07-16 01:18:20.026701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.251 qpair failed and we were unable to recover it.
00:25:04.251 [2024-07-16 01:18:20.026851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.251 [2024-07-16 01:18:20.026883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.251 qpair failed and we were unable to recover it.
00:25:04.251 [2024-07-16 01:18:20.027041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.251 [2024-07-16 01:18:20.027072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.251 qpair failed and we were unable to recover it.
00:25:04.252 [2024-07-16 01:18:20.027245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.252 [2024-07-16 01:18:20.027277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.252 qpair failed and we were unable to recover it.
00:25:04.252 [2024-07-16 01:18:20.027415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.252 [2024-07-16 01:18:20.027446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.252 qpair failed and we were unable to recover it.
00:25:04.252 [2024-07-16 01:18:20.027596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.252 [2024-07-16 01:18:20.027627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.252 qpair failed and we were unable to recover it.
00:25:04.252 [2024-07-16 01:18:20.027772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.252 [2024-07-16 01:18:20.027820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.252 qpair failed and we were unable to recover it.
00:25:04.252 [2024-07-16 01:18:20.027947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.252 [2024-07-16 01:18:20.028006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.252 qpair failed and we were unable to recover it.
00:25:04.252 [2024-07-16 01:18:20.028184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.252 [2024-07-16 01:18:20.028216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.252 qpair failed and we were unable to recover it.
00:25:04.252 [2024-07-16 01:18:20.028443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.252 [2024-07-16 01:18:20.028473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.252 qpair failed and we were unable to recover it.
00:25:04.252 [2024-07-16 01:18:20.028612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.252 [2024-07-16 01:18:20.028641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.252 qpair failed and we were unable to recover it.
00:25:04.252 [2024-07-16 01:18:20.028821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.252 [2024-07-16 01:18:20.028875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.252 qpair failed and we were unable to recover it.
00:25:04.252 [2024-07-16 01:18:20.029052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.252 [2024-07-16 01:18:20.029083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.252 qpair failed and we were unable to recover it.
00:25:04.252 [2024-07-16 01:18:20.029231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.252 [2024-07-16 01:18:20.029262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.252 qpair failed and we were unable to recover it.
00:25:04.252 [2024-07-16 01:18:20.029444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.252 [2024-07-16 01:18:20.029510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.252 qpair failed and we were unable to recover it.
00:25:04.252 [2024-07-16 01:18:20.029778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.252 [2024-07-16 01:18:20.029810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.252 qpair failed and we were unable to recover it.
00:25:04.252 [2024-07-16 01:18:20.029986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.252 [2024-07-16 01:18:20.030018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.252 qpair failed and we were unable to recover it.
00:25:04.252 [2024-07-16 01:18:20.030146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.252 [2024-07-16 01:18:20.030177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.252 qpair failed and we were unable to recover it.
00:25:04.252 [2024-07-16 01:18:20.030398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.252 [2024-07-16 01:18:20.030458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.252 qpair failed and we were unable to recover it.
00:25:04.252 [2024-07-16 01:18:20.030714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.252 [2024-07-16 01:18:20.030746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.252 qpair failed and we were unable to recover it.
00:25:04.252 [2024-07-16 01:18:20.030940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.252 [2024-07-16 01:18:20.030983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.252 qpair failed and we were unable to recover it.
00:25:04.252 [2024-07-16 01:18:20.031112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.252 [2024-07-16 01:18:20.031146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.252 qpair failed and we were unable to recover it.
00:25:04.252 [2024-07-16 01:18:20.031361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.252 [2024-07-16 01:18:20.031391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.252 qpair failed and we were unable to recover it.
00:25:04.252 [2024-07-16 01:18:20.031507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.252 [2024-07-16 01:18:20.031536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.252 qpair failed and we were unable to recover it.
00:25:04.252 [2024-07-16 01:18:20.031763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.252 [2024-07-16 01:18:20.031818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.252 qpair failed and we were unable to recover it.
00:25:04.252 [2024-07-16 01:18:20.032082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.252 [2024-07-16 01:18:20.032117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.252 qpair failed and we were unable to recover it.
00:25:04.252 [2024-07-16 01:18:20.032323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.252 [2024-07-16 01:18:20.032387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.252 qpair failed and we were unable to recover it.
00:25:04.252 [2024-07-16 01:18:20.032635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.252 [2024-07-16 01:18:20.032692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.252 qpair failed and we were unable to recover it.
00:25:04.252 [2024-07-16 01:18:20.032963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.252 [2024-07-16 01:18:20.032997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.252 qpair failed and we were unable to recover it.
00:25:04.252 [2024-07-16 01:18:20.033201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.252 [2024-07-16 01:18:20.033233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.252 qpair failed and we were unable to recover it.
00:25:04.252 [2024-07-16 01:18:20.033382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.252 [2024-07-16 01:18:20.033415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.252 qpair failed and we were unable to recover it.
00:25:04.252 [2024-07-16 01:18:20.033540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.252 [2024-07-16 01:18:20.033572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.252 qpair failed and we were unable to recover it.
00:25:04.252 [2024-07-16 01:18:20.033800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.252 [2024-07-16 01:18:20.033833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.252 qpair failed and we were unable to recover it.
00:25:04.252 [2024-07-16 01:18:20.034023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.252 [2024-07-16 01:18:20.034054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.252 qpair failed and we were unable to recover it.
00:25:04.252 [2024-07-16 01:18:20.034185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.252 [2024-07-16 01:18:20.034215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.252 qpair failed and we were unable to recover it.
00:25:04.252 [2024-07-16 01:18:20.034382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.252 [2024-07-16 01:18:20.034415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.252 qpair failed and we were unable to recover it.
00:25:04.252 [2024-07-16 01:18:20.034541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.252 [2024-07-16 01:18:20.034573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.252 qpair failed and we were unable to recover it.
00:25:04.252 [2024-07-16 01:18:20.034726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.252 [2024-07-16 01:18:20.034755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.252 qpair failed and we were unable to recover it.
00:25:04.252 [2024-07-16 01:18:20.034917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.252 [2024-07-16 01:18:20.034947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.252 qpair failed and we were unable to recover it.
00:25:04.252 [2024-07-16 01:18:20.035122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.252 [2024-07-16 01:18:20.035155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.252 qpair failed and we were unable to recover it.
00:25:04.252 [2024-07-16 01:18:20.035305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.252 [2024-07-16 01:18:20.035339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.252 qpair failed and we were unable to recover it.
00:25:04.252 [2024-07-16 01:18:20.035575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.252 [2024-07-16 01:18:20.035627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.252 qpair failed and we were unable to recover it.
00:25:04.252 [2024-07-16 01:18:20.035854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.252 [2024-07-16 01:18:20.035905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.252 qpair failed and we were unable to recover it.
00:25:04.252 [2024-07-16 01:18:20.036124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.252 [2024-07-16 01:18:20.036159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.252 qpair failed and we were unable to recover it.
00:25:04.252 [2024-07-16 01:18:20.036360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.253 [2024-07-16 01:18:20.036408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.253 qpair failed and we were unable to recover it.
00:25:04.253 [2024-07-16 01:18:20.036724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.253 [2024-07-16 01:18:20.036765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.253 qpair failed and we were unable to recover it.
00:25:04.253 [2024-07-16 01:18:20.036926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.253 [2024-07-16 01:18:20.036998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.253 qpair failed and we were unable to recover it.
00:25:04.253 [2024-07-16 01:18:20.037153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.253 [2024-07-16 01:18:20.037203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.253 qpair failed and we were unable to recover it.
00:25:04.253 [2024-07-16 01:18:20.037441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.253 [2024-07-16 01:18:20.037501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.253 qpair failed and we were unable to recover it.
00:25:04.253 [2024-07-16 01:18:20.037746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.253 [2024-07-16 01:18:20.037794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.253 qpair failed and we were unable to recover it.
00:25:04.253 [2024-07-16 01:18:20.038007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.253 [2024-07-16 01:18:20.038058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.253 qpair failed and we were unable to recover it.
00:25:04.253 [2024-07-16 01:18:20.038262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.253 [2024-07-16 01:18:20.038311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.253 qpair failed and we were unable to recover it.
00:25:04.253 [2024-07-16 01:18:20.038505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.253 [2024-07-16 01:18:20.038537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.253 qpair failed and we were unable to recover it.
00:25:04.253 [2024-07-16 01:18:20.038664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.253 [2024-07-16 01:18:20.038697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.253 qpair failed and we were unable to recover it.
00:25:04.253 [2024-07-16 01:18:20.038876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.253 [2024-07-16 01:18:20.038925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.253 qpair failed and we were unable to recover it.
00:25:04.253 [2024-07-16 01:18:20.039069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.253 [2024-07-16 01:18:20.039103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.253 qpair failed and we were unable to recover it.
00:25:04.253 [2024-07-16 01:18:20.039291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.253 [2024-07-16 01:18:20.039332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.253 qpair failed and we were unable to recover it.
00:25:04.253 [2024-07-16 01:18:20.039559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.253 [2024-07-16 01:18:20.039604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.253 qpair failed and we were unable to recover it.
00:25:04.253 [2024-07-16 01:18:20.039773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.253 [2024-07-16 01:18:20.039819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.253 qpair failed and we were unable to recover it.
00:25:04.253 [2024-07-16 01:18:20.040021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.253 [2024-07-16 01:18:20.040069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.253 qpair failed and we were unable to recover it.
00:25:04.253 [2024-07-16 01:18:20.040278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.253 [2024-07-16 01:18:20.040324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.253 qpair failed and we were unable to recover it.
00:25:04.253 [2024-07-16 01:18:20.040556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.253 [2024-07-16 01:18:20.040602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.253 qpair failed and we were unable to recover it.
00:25:04.253 [2024-07-16 01:18:20.040848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.253 [2024-07-16 01:18:20.040895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.253 qpair failed and we were unable to recover it.
00:25:04.253 [2024-07-16 01:18:20.041137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.253 [2024-07-16 01:18:20.041183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.253 qpair failed and we were unable to recover it.
00:25:04.253 [2024-07-16 01:18:20.041440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.253 [2024-07-16 01:18:20.041473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.253 qpair failed and we were unable to recover it.
00:25:04.253 [2024-07-16 01:18:20.041647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.253 [2024-07-16 01:18:20.041694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.253 qpair failed and we were unable to recover it.
00:25:04.253 [2024-07-16 01:18:20.041905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.253 [2024-07-16 01:18:20.041935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.253 qpair failed and we were unable to recover it.
00:25:04.253 [2024-07-16 01:18:20.042087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.253 [2024-07-16 01:18:20.042118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.253 qpair failed and we were unable to recover it.
00:25:04.253 [2024-07-16 01:18:20.042274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.253 [2024-07-16 01:18:20.042307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.253 qpair failed and we were unable to recover it.
00:25:04.253 [2024-07-16 01:18:20.042489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.253 [2024-07-16 01:18:20.042544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.253 qpair failed and we were unable to recover it.
00:25:04.253 [2024-07-16 01:18:20.042785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.253 [2024-07-16 01:18:20.042849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.253 qpair failed and we were unable to recover it.
00:25:04.253 [2024-07-16 01:18:20.043111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.253 [2024-07-16 01:18:20.043145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.253 qpair failed and we were unable to recover it.
00:25:04.253 [2024-07-16 01:18:20.043294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.253 [2024-07-16 01:18:20.043327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.253 qpair failed and we were unable to recover it.
00:25:04.253 [2024-07-16 01:18:20.043531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.253 [2024-07-16 01:18:20.043576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.253 qpair failed and we were unable to recover it.
00:25:04.253 [2024-07-16 01:18:20.043781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.253 [2024-07-16 01:18:20.043858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.253 qpair failed and we were unable to recover it.
00:25:04.253 [2024-07-16 01:18:20.044156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.253 [2024-07-16 01:18:20.044210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.253 qpair failed and we were unable to recover it.
00:25:04.253 [2024-07-16 01:18:20.044465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.253 [2024-07-16 01:18:20.044497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.253 qpair failed and we were unable to recover it.
00:25:04.253 [2024-07-16 01:18:20.044658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.253 [2024-07-16 01:18:20.044690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.253 qpair failed and we were unable to recover it.
00:25:04.253 [2024-07-16 01:18:20.044899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.253 [2024-07-16 01:18:20.045015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.253 qpair failed and we were unable to recover it.
00:25:04.253 [2024-07-16 01:18:20.045239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.253 [2024-07-16 01:18:20.045274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.253 qpair failed and we were unable to recover it.
00:25:04.253 [2024-07-16 01:18:20.045431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.254 [2024-07-16 01:18:20.045496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.254 qpair failed and we were unable to recover it.
00:25:04.254 [2024-07-16 01:18:20.045809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.254 [2024-07-16 01:18:20.045873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.254 qpair failed and we were unable to recover it.
00:25:04.254 [2024-07-16 01:18:20.046153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.254 [2024-07-16 01:18:20.046186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.254 qpair failed and we were unable to recover it.
00:25:04.254 [2024-07-16 01:18:20.046318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.254 [2024-07-16 01:18:20.046351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.254 qpair failed and we were unable to recover it.
00:25:04.254 [2024-07-16 01:18:20.046563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.254 [2024-07-16 01:18:20.046595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.254 qpair failed and we were unable to recover it.
00:25:04.254 [2024-07-16 01:18:20.046744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.254 [2024-07-16 01:18:20.046776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.254 qpair failed and we were unable to recover it.
00:25:04.254 [2024-07-16 01:18:20.046983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.254 [2024-07-16 01:18:20.047057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.254 qpair failed and we were unable to recover it.
00:25:04.254 [2024-07-16 01:18:20.047249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.254 [2024-07-16 01:18:20.047281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.254 qpair failed and we were unable to recover it.
00:25:04.254 [2024-07-16 01:18:20.047430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.254 [2024-07-16 01:18:20.047462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.254 qpair failed and we were unable to recover it.
00:25:04.254 [2024-07-16 01:18:20.047768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.254 [2024-07-16 01:18:20.047797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.254 qpair failed and we were unable to recover it.
00:25:04.254 [2024-07-16 01:18:20.047939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.254 [2024-07-16 01:18:20.047976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.254 qpair failed and we were unable to recover it.
00:25:04.254 [2024-07-16 01:18:20.048185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.254 [2024-07-16 01:18:20.048236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.254 qpair failed and we were unable to recover it.
00:25:04.254 [2024-07-16 01:18:20.048424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.254 [2024-07-16 01:18:20.048454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.254 qpair failed and we were unable to recover it.
00:25:04.254 [2024-07-16 01:18:20.048557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.254 [2024-07-16 01:18:20.048586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.254 qpair failed and we were unable to recover it.
00:25:04.254 [2024-07-16 01:18:20.048717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.254 [2024-07-16 01:18:20.048746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.254 qpair failed and we were unable to recover it.
00:25:04.254 [2024-07-16 01:18:20.048914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.254 [2024-07-16 01:18:20.048944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.254 qpair failed and we were unable to recover it.
00:25:04.254 [2024-07-16 01:18:20.049113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.254 [2024-07-16 01:18:20.049159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.254 qpair failed and we were unable to recover it.
00:25:04.254 [2024-07-16 01:18:20.049369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.254 [2024-07-16 01:18:20.049446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.254 qpair failed and we were unable to recover it.
00:25:04.254 [2024-07-16 01:18:20.049755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.254 [2024-07-16 01:18:20.049818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.254 qpair failed and we were unable to recover it.
00:25:04.254 [2024-07-16 01:18:20.050115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.254 [2024-07-16 01:18:20.050150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.254 qpair failed and we were unable to recover it.
00:25:04.254 [2024-07-16 01:18:20.050311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.254 [2024-07-16 01:18:20.050344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.254 qpair failed and we were unable to recover it.
00:25:04.254 [2024-07-16 01:18:20.050500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.254 [2024-07-16 01:18:20.050550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.254 qpair failed and we were unable to recover it.
00:25:04.254 [2024-07-16 01:18:20.050802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.254 [2024-07-16 01:18:20.050841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.254 qpair failed and we were unable to recover it.
00:25:04.254 [2024-07-16 01:18:20.050998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.254 [2024-07-16 01:18:20.051057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.254 qpair failed and we were unable to recover it.
00:25:04.254 [2024-07-16 01:18:20.051314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.254 [2024-07-16 01:18:20.051378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.254 qpair failed and we were unable to recover it.
00:25:04.254 [2024-07-16 01:18:20.051643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.254 [2024-07-16 01:18:20.051678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.254 qpair failed and we were unable to recover it.
00:25:04.254 [2024-07-16 01:18:20.051862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.254 [2024-07-16 01:18:20.051920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.254 qpair failed and we were unable to recover it.
00:25:04.254 [2024-07-16 01:18:20.052183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.254 [2024-07-16 01:18:20.052224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.254 qpair failed and we were unable to recover it.
00:25:04.254 [2024-07-16 01:18:20.052433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.254 [2024-07-16 01:18:20.052474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.254 qpair failed and we were unable to recover it. 00:25:04.254 [2024-07-16 01:18:20.052718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.254 [2024-07-16 01:18:20.052782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.254 qpair failed and we were unable to recover it. 00:25:04.254 [2024-07-16 01:18:20.053065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.254 [2024-07-16 01:18:20.053113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.254 qpair failed and we were unable to recover it. 00:25:04.254 [2024-07-16 01:18:20.053364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.254 [2024-07-16 01:18:20.053397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.254 qpair failed and we were unable to recover it. 00:25:04.254 [2024-07-16 01:18:20.053588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.254 [2024-07-16 01:18:20.053619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.254 qpair failed and we were unable to recover it. 00:25:04.254 [2024-07-16 01:18:20.053742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.254 [2024-07-16 01:18:20.053772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.254 qpair failed and we were unable to recover it. 00:25:04.254 [2024-07-16 01:18:20.053909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.254 [2024-07-16 01:18:20.053942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.254 qpair failed and we were unable to recover it. 00:25:04.254 [2024-07-16 01:18:20.054097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.254 [2024-07-16 01:18:20.054130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.254 qpair failed and we were unable to recover it. 00:25:04.254 [2024-07-16 01:18:20.054319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.254 [2024-07-16 01:18:20.054353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.254 qpair failed and we were unable to recover it. 00:25:04.254 [2024-07-16 01:18:20.054612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.254 [2024-07-16 01:18:20.054646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.254 qpair failed and we were unable to recover it. 
00:25:04.254 [2024-07-16 01:18:20.054778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.254 [2024-07-16 01:18:20.054812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.254 qpair failed and we were unable to recover it. 00:25:04.254 [2024-07-16 01:18:20.055026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.254 [2024-07-16 01:18:20.055062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.254 qpair failed and we were unable to recover it. 00:25:04.254 [2024-07-16 01:18:20.055217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.254 [2024-07-16 01:18:20.055251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.254 qpair failed and we were unable to recover it. 00:25:04.254 [2024-07-16 01:18:20.055470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.255 [2024-07-16 01:18:20.055542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.255 qpair failed and we were unable to recover it. 00:25:04.255 [2024-07-16 01:18:20.055817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.255 [2024-07-16 01:18:20.055895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.255 qpair failed and we were unable to recover it. 00:25:04.255 [2024-07-16 01:18:20.056180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.255 [2024-07-16 01:18:20.056226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.255 qpair failed and we were unable to recover it. 00:25:04.255 [2024-07-16 01:18:20.056430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.255 [2024-07-16 01:18:20.056464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.255 qpair failed and we were unable to recover it. 00:25:04.255 [2024-07-16 01:18:20.056666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.255 [2024-07-16 01:18:20.056745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.255 qpair failed and we were unable to recover it. 00:25:04.255 [2024-07-16 01:18:20.057027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.255 [2024-07-16 01:18:20.057062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.255 qpair failed and we were unable to recover it. 00:25:04.255 [2024-07-16 01:18:20.057189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.255 [2024-07-16 01:18:20.057222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.255 qpair failed and we were unable to recover it. 
00:25:04.255 [2024-07-16 01:18:20.057405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.255 [2024-07-16 01:18:20.057439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.255 qpair failed and we were unable to recover it. 00:25:04.255 [2024-07-16 01:18:20.057697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.255 [2024-07-16 01:18:20.057757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.255 qpair failed and we were unable to recover it. 00:25:04.255 [2024-07-16 01:18:20.058051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.255 [2024-07-16 01:18:20.058086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.255 qpair failed and we were unable to recover it. 00:25:04.255 [2024-07-16 01:18:20.058243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.255 [2024-07-16 01:18:20.058277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.255 qpair failed and we were unable to recover it. 00:25:04.255 [2024-07-16 01:18:20.058428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.255 [2024-07-16 01:18:20.058462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.255 qpair failed and we were unable to recover it. 00:25:04.255 [2024-07-16 01:18:20.058576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.255 [2024-07-16 01:18:20.058609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.255 qpair failed and we were unable to recover it. 00:25:04.255 [2024-07-16 01:18:20.058734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.255 [2024-07-16 01:18:20.058768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.255 qpair failed and we were unable to recover it. 00:25:04.255 [2024-07-16 01:18:20.059000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.255 [2024-07-16 01:18:20.059034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.255 qpair failed and we were unable to recover it. 00:25:04.255 [2024-07-16 01:18:20.059169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.255 [2024-07-16 01:18:20.059221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.255 qpair failed and we were unable to recover it. 00:25:04.255 [2024-07-16 01:18:20.059396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.255 [2024-07-16 01:18:20.059441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.255 qpair failed and we were unable to recover it. 
00:25:04.255 [2024-07-16 01:18:20.059676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.255 [2024-07-16 01:18:20.059736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.255 qpair failed and we were unable to recover it. 00:25:04.255 [2024-07-16 01:18:20.060043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.255 [2024-07-16 01:18:20.060090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.255 qpair failed and we were unable to recover it. 00:25:04.255 [2024-07-16 01:18:20.060290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.255 [2024-07-16 01:18:20.060336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.255 qpair failed and we were unable to recover it. 00:25:04.255 [2024-07-16 01:18:20.060544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.255 [2024-07-16 01:18:20.060584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.255 qpair failed and we were unable to recover it. 00:25:04.255 [2024-07-16 01:18:20.060763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.255 [2024-07-16 01:18:20.060804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.255 qpair failed and we were unable to recover it. 00:25:04.255 [2024-07-16 01:18:20.060974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.255 [2024-07-16 01:18:20.061016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.255 qpair failed and we were unable to recover it. 00:25:04.255 [2024-07-16 01:18:20.061202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.255 [2024-07-16 01:18:20.061243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.255 qpair failed and we were unable to recover it. 00:25:04.255 [2024-07-16 01:18:20.061540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.255 [2024-07-16 01:18:20.061618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.255 qpair failed and we were unable to recover it. 00:25:04.255 [2024-07-16 01:18:20.061871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.255 [2024-07-16 01:18:20.061931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.255 qpair failed and we were unable to recover it. 00:25:04.255 [2024-07-16 01:18:20.062234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.255 [2024-07-16 01:18:20.062269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.255 qpair failed and we were unable to recover it. 
00:25:04.255 [2024-07-16 01:18:20.062421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.255 [2024-07-16 01:18:20.062455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.255 qpair failed and we were unable to recover it. 00:25:04.255 [2024-07-16 01:18:20.062657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.255 [2024-07-16 01:18:20.062703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.255 qpair failed and we were unable to recover it. 00:25:04.255 [2024-07-16 01:18:20.062935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.255 [2024-07-16 01:18:20.063026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.255 qpair failed and we were unable to recover it. 00:25:04.255 [2024-07-16 01:18:20.063215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.255 [2024-07-16 01:18:20.063249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.255 qpair failed and we were unable to recover it. 00:25:04.255 [2024-07-16 01:18:20.063410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.255 [2024-07-16 01:18:20.063471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.255 qpair failed and we were unable to recover it. 00:25:04.255 [2024-07-16 01:18:20.063783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.255 [2024-07-16 01:18:20.063860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.255 qpair failed and we were unable to recover it. 00:25:04.255 [2024-07-16 01:18:20.064205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.255 [2024-07-16 01:18:20.064284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.255 qpair failed and we were unable to recover it. 00:25:04.255 [2024-07-16 01:18:20.064579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.255 [2024-07-16 01:18:20.064656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.255 qpair failed and we were unable to recover it. 00:25:04.255 [2024-07-16 01:18:20.064943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.255 [2024-07-16 01:18:20.064986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.255 qpair failed and we were unable to recover it. 00:25:04.255 [2024-07-16 01:18:20.065125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.255 [2024-07-16 01:18:20.065160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.255 qpair failed and we were unable to recover it. 
00:25:04.255 [2024-07-16 01:18:20.065490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.255 [2024-07-16 01:18:20.065566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.255 qpair failed and we were unable to recover it. 00:25:04.255 [2024-07-16 01:18:20.065859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.255 [2024-07-16 01:18:20.065937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.255 qpair failed and we were unable to recover it. 00:25:04.255 [2024-07-16 01:18:20.066301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.255 [2024-07-16 01:18:20.066380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.255 qpair failed and we were unable to recover it. 00:25:04.255 [2024-07-16 01:18:20.066693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.255 [2024-07-16 01:18:20.066770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.255 qpair failed and we were unable to recover it. 00:25:04.255 [2024-07-16 01:18:20.067058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.255 [2024-07-16 01:18:20.067120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.255 qpair failed and we were unable to recover it. 00:25:04.256 [2024-07-16 01:18:20.067439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.256 [2024-07-16 01:18:20.067520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.256 qpair failed and we were unable to recover it. 00:25:04.256 [2024-07-16 01:18:20.067749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.256 [2024-07-16 01:18:20.067782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.256 qpair failed and we were unable to recover it. 00:25:04.256 [2024-07-16 01:18:20.067969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.256 [2024-07-16 01:18:20.068032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.256 qpair failed and we were unable to recover it. 00:25:04.256 [2024-07-16 01:18:20.068308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.256 [2024-07-16 01:18:20.068342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.256 qpair failed and we were unable to recover it. 00:25:04.256 [2024-07-16 01:18:20.068582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.256 [2024-07-16 01:18:20.068660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.256 qpair failed and we were unable to recover it. 
00:25:04.256 [2024-07-16 01:18:20.068910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.256 [2024-07-16 01:18:20.068981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.256 qpair failed and we were unable to recover it. 00:25:04.256 [2024-07-16 01:18:20.069243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.256 [2024-07-16 01:18:20.069277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.256 qpair failed and we were unable to recover it. 00:25:04.256 [2024-07-16 01:18:20.069441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.256 [2024-07-16 01:18:20.069481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.256 qpair failed and we were unable to recover it. 00:25:04.256 [2024-07-16 01:18:20.069774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.256 [2024-07-16 01:18:20.069851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.256 qpair failed and we were unable to recover it. 00:25:04.256 [2024-07-16 01:18:20.070138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.256 [2024-07-16 01:18:20.070185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.256 qpair failed and we were unable to recover it. 00:25:04.256 [2024-07-16 01:18:20.070365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.256 [2024-07-16 01:18:20.070399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.256 qpair failed and we were unable to recover it. 00:25:04.256 [2024-07-16 01:18:20.070557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.256 [2024-07-16 01:18:20.070591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.256 qpair failed and we were unable to recover it. 00:25:04.256 [2024-07-16 01:18:20.070771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.256 [2024-07-16 01:18:20.070824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.256 qpair failed and we were unable to recover it. 00:25:04.256 [2024-07-16 01:18:20.071138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.256 [2024-07-16 01:18:20.071217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.256 qpair failed and we were unable to recover it. 00:25:04.256 [2024-07-16 01:18:20.071512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.256 [2024-07-16 01:18:20.071590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.256 qpair failed and we were unable to recover it. 
00:25:04.256 [2024-07-16 01:18:20.071821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.256 [2024-07-16 01:18:20.071855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.256 qpair failed and we were unable to recover it. 00:25:04.256 [2024-07-16 01:18:20.072036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.256 [2024-07-16 01:18:20.072071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.256 qpair failed and we were unable to recover it. 00:25:04.256 [2024-07-16 01:18:20.072370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.256 [2024-07-16 01:18:20.072415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.256 qpair failed and we were unable to recover it. 00:25:04.256 [2024-07-16 01:18:20.072612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.256 [2024-07-16 01:18:20.072657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.256 qpair failed and we were unable to recover it. 00:25:04.256 [2024-07-16 01:18:20.072945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.256 [2024-07-16 01:18:20.073017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.256 qpair failed and we were unable to recover it. 00:25:04.256 [2024-07-16 01:18:20.073296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.256 [2024-07-16 01:18:20.073330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.256 qpair failed and we were unable to recover it. 00:25:04.256 [2024-07-16 01:18:20.073515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.256 [2024-07-16 01:18:20.073549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.256 qpair failed and we were unable to recover it. 00:25:04.256 [2024-07-16 01:18:20.073709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.256 [2024-07-16 01:18:20.073779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.256 qpair failed and we were unable to recover it. 00:25:04.256 [2024-07-16 01:18:20.074052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.256 [2024-07-16 01:18:20.074129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.256 qpair failed and we were unable to recover it. 00:25:04.256 [2024-07-16 01:18:20.074428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.256 [2024-07-16 01:18:20.074508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.256 qpair failed and we were unable to recover it. 
00:25:04.256 [2024-07-16 01:18:20.074793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.256 [2024-07-16 01:18:20.074827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.256 qpair failed and we were unable to recover it. 00:25:04.256 [2024-07-16 01:18:20.074985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.256 [2024-07-16 01:18:20.075040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.256 qpair failed and we were unable to recover it. 00:25:04.256 [2024-07-16 01:18:20.075227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.256 [2024-07-16 01:18:20.075268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.256 qpair failed and we were unable to recover it. 00:25:04.256 [2024-07-16 01:18:20.075474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.256 [2024-07-16 01:18:20.075544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.256 qpair failed and we were unable to recover it. 00:25:04.256 [2024-07-16 01:18:20.075823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.256 [2024-07-16 01:18:20.075857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.256 qpair failed and we were unable to recover it. 00:25:04.256 [2024-07-16 01:18:20.076020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.256 [2024-07-16 01:18:20.076055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.256 qpair failed and we were unable to recover it. 00:25:04.256 [2024-07-16 01:18:20.076266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.256 [2024-07-16 01:18:20.076300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.256 qpair failed and we were unable to recover it. 00:25:04.256 [2024-07-16 01:18:20.076429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.256 [2024-07-16 01:18:20.076463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.256 qpair failed and we were unable to recover it. 00:25:04.256 [2024-07-16 01:18:20.076622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.256 [2024-07-16 01:18:20.076655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.256 qpair failed and we were unable to recover it. 00:25:04.256 [2024-07-16 01:18:20.076902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.256 [2024-07-16 01:18:20.076940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.256 qpair failed and we were unable to recover it. 
00:25:04.256 [2024-07-16 01:18:20.077079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.256 [2024-07-16 01:18:20.077113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.256 qpair failed and we were unable to recover it. 00:25:04.256 [2024-07-16 01:18:20.077241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.256 [2024-07-16 01:18:20.077275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.256 qpair failed and we were unable to recover it. 00:25:04.256 [2024-07-16 01:18:20.077526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.256 [2024-07-16 01:18:20.077586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.256 qpair failed and we were unable to recover it. 00:25:04.256 [2024-07-16 01:18:20.077807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.256 [2024-07-16 01:18:20.077868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.256 qpair failed and we were unable to recover it. 00:25:04.256 [2024-07-16 01:18:20.078212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.256 [2024-07-16 01:18:20.078290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.256 qpair failed and we were unable to recover it. 00:25:04.256 [2024-07-16 01:18:20.078567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.256 [2024-07-16 01:18:20.078608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.256 qpair failed and we were unable to recover it. 00:25:04.256 [2024-07-16 01:18:20.078814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.257 [2024-07-16 01:18:20.078855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.257 qpair failed and we were unable to recover it. 00:25:04.257 [2024-07-16 01:18:20.079092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.257 [2024-07-16 01:18:20.079171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.257 qpair failed and we were unable to recover it. 00:25:04.257 [2024-07-16 01:18:20.079454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.257 [2024-07-16 01:18:20.079532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.257 qpair failed and we were unable to recover it. 00:25:04.257 [2024-07-16 01:18:20.079823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.257 [2024-07-16 01:18:20.079883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.257 qpair failed and we were unable to recover it. 
00:25:04.257 [2024-07-16 01:18:20.080232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.257 [2024-07-16 01:18:20.080309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.257 qpair failed and we were unable to recover it. 00:25:04.257 [2024-07-16 01:18:20.080584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.257 [2024-07-16 01:18:20.080618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.257 qpair failed and we were unable to recover it. 00:25:04.257 [2024-07-16 01:18:20.080777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.257 [2024-07-16 01:18:20.080841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.257 qpair failed and we were unable to recover it. 00:25:04.257 [2024-07-16 01:18:20.081167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.257 [2024-07-16 01:18:20.081245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.257 qpair failed and we were unable to recover it. 00:25:04.257 [2024-07-16 01:18:20.081562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.257 [2024-07-16 01:18:20.081638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.257 qpair failed and we were unable to recover it. 00:25:04.257 [2024-07-16 01:18:20.081918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.257 [2024-07-16 01:18:20.081953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.257 qpair failed and we were unable to recover it. 00:25:04.257 [2024-07-16 01:18:20.082127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.257 [2024-07-16 01:18:20.082182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.257 qpair failed and we were unable to recover it. 00:25:04.257 [2024-07-16 01:18:20.082367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.257 [2024-07-16 01:18:20.082433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.257 qpair failed and we were unable to recover it. 00:25:04.257 [2024-07-16 01:18:20.082729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.257 [2024-07-16 01:18:20.082807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.257 qpair failed and we were unable to recover it. 00:25:04.257 [2024-07-16 01:18:20.083137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.257 [2024-07-16 01:18:20.083217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.257 qpair failed and we were unable to recover it. 
00:25:04.257 [2024-07-16 01:18:20.083522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.257 [2024-07-16 01:18:20.083557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.257 qpair failed and we were unable to recover it. 00:25:04.257 [2024-07-16 01:18:20.083682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.257 [2024-07-16 01:18:20.083716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.257 qpair failed and we were unable to recover it. 00:25:04.257 [2024-07-16 01:18:20.083872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.257 [2024-07-16 01:18:20.083905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.257 qpair failed and we were unable to recover it. 00:25:04.257 [2024-07-16 01:18:20.084069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.257 [2024-07-16 01:18:20.084137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.257 qpair failed and we were unable to recover it. 00:25:04.257 [2024-07-16 01:18:20.084459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.257 [2024-07-16 01:18:20.084536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.257 qpair failed and we were unable to recover it. 00:25:04.257 [2024-07-16 01:18:20.084826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.257 [2024-07-16 01:18:20.084886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.257 qpair failed and we were unable to recover it. 00:25:04.257 [2024-07-16 01:18:20.085135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.257 [2024-07-16 01:18:20.085176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.257 qpair failed and we were unable to recover it. 00:25:04.257 [2024-07-16 01:18:20.085434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.257 [2024-07-16 01:18:20.085511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.257 qpair failed and we were unable to recover it. 00:25:04.257 [2024-07-16 01:18:20.085825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.257 [2024-07-16 01:18:20.085903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.257 qpair failed and we were unable to recover it. 00:25:04.257 [2024-07-16 01:18:20.086167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.257 [2024-07-16 01:18:20.086245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.257 qpair failed and we were unable to recover it. 
00:25:04.257 [2024-07-16 01:18:20.086536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.257 [2024-07-16 01:18:20.086569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.257 qpair failed and we were unable to recover it. 00:25:04.257 [2024-07-16 01:18:20.086727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.257 [2024-07-16 01:18:20.086779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.257 qpair failed and we were unable to recover it. 00:25:04.257 [2024-07-16 01:18:20.087005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.257 [2024-07-16 01:18:20.087048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.257 qpair failed and we were unable to recover it. 00:25:04.257 [2024-07-16 01:18:20.087300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.257 [2024-07-16 01:18:20.087335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.257 qpair failed and we were unable to recover it. 00:25:04.257 [2024-07-16 01:18:20.087494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.257 [2024-07-16 01:18:20.087528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.257 qpair failed and we were unable to recover it. 00:25:04.257 [2024-07-16 01:18:20.087654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.257 [2024-07-16 01:18:20.087687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.257 qpair failed and we were unable to recover it. 00:25:04.257 [2024-07-16 01:18:20.087839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.257 [2024-07-16 01:18:20.087872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.257 qpair failed and we were unable to recover it. 00:25:04.257 [2024-07-16 01:18:20.088103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.257 [2024-07-16 01:18:20.088179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.257 qpair failed and we were unable to recover it. 00:25:04.257 [2024-07-16 01:18:20.088503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.257 [2024-07-16 01:18:20.088580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.257 qpair failed and we were unable to recover it. 00:25:04.257 [2024-07-16 01:18:20.088828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.257 [2024-07-16 01:18:20.088888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.257 qpair failed and we were unable to recover it. 
00:25:04.257 [2024-07-16 01:18:20.089183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.257 [2024-07-16 01:18:20.089260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.257 qpair failed and we were unable to recover it. 00:25:04.257 [2024-07-16 01:18:20.089523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.257 [2024-07-16 01:18:20.089557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.257 qpair failed and we were unable to recover it. 00:25:04.258 [2024-07-16 01:18:20.089709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.258 [2024-07-16 01:18:20.089743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.258 qpair failed and we were unable to recover it. 00:25:04.258 [2024-07-16 01:18:20.089873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.258 [2024-07-16 01:18:20.089907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.258 qpair failed and we were unable to recover it. 00:25:04.258 [2024-07-16 01:18:20.090161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.258 [2024-07-16 01:18:20.090238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.258 qpair failed and we were unable to recover it. 00:25:04.258 [2024-07-16 01:18:20.090551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.258 [2024-07-16 01:18:20.090635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.258 qpair failed and we were unable to recover it. 00:25:04.258 [2024-07-16 01:18:20.090888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.258 [2024-07-16 01:18:20.090947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.258 qpair failed and we were unable to recover it. 00:25:04.258 [2024-07-16 01:18:20.091296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.258 [2024-07-16 01:18:20.091373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.258 qpair failed and we were unable to recover it. 00:25:04.258 [2024-07-16 01:18:20.091658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.258 [2024-07-16 01:18:20.091699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.258 qpair failed and we were unable to recover it. 00:25:04.258 [2024-07-16 01:18:20.091843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.258 [2024-07-16 01:18:20.091884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.258 qpair failed and we were unable to recover it. 
00:25:04.258 [2024-07-16 01:18:20.092088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.258 [2024-07-16 01:18:20.092130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.258 qpair failed and we were unable to recover it. 00:25:04.258 [2024-07-16 01:18:20.092422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.258 [2024-07-16 01:18:20.092505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.258 qpair failed and we were unable to recover it. 00:25:04.258 [2024-07-16 01:18:20.092782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.258 [2024-07-16 01:18:20.092858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.258 qpair failed and we were unable to recover it. 00:25:04.258 [2024-07-16 01:18:20.093162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.258 [2024-07-16 01:18:20.093203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.258 qpair failed and we were unable to recover it. 00:25:04.258 [2024-07-16 01:18:20.093395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.258 [2024-07-16 01:18:20.093436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.258 qpair failed and we were unable to recover it. 00:25:04.258 [2024-07-16 01:18:20.093624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.258 [2024-07-16 01:18:20.093685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.258 qpair failed and we were unable to recover it. 00:25:04.258 [2024-07-16 01:18:20.093930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.258 [2024-07-16 01:18:20.093993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.258 qpair failed and we were unable to recover it. 00:25:04.258 [2024-07-16 01:18:20.094182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.258 [2024-07-16 01:18:20.094223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.258 qpair failed and we were unable to recover it. 00:25:04.258 [2024-07-16 01:18:20.094517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.258 [2024-07-16 01:18:20.094593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.258 qpair failed and we were unable to recover it. 00:25:04.258 [2024-07-16 01:18:20.094817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.258 [2024-07-16 01:18:20.094876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.258 qpair failed and we were unable to recover it. 
00:25:04.258 [2024-07-16 01:18:20.095187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.258 [2024-07-16 01:18:20.095229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.258 qpair failed and we were unable to recover it. 00:25:04.258 [2024-07-16 01:18:20.095486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.258 [2024-07-16 01:18:20.095565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.258 qpair failed and we were unable to recover it. 00:25:04.258 [2024-07-16 01:18:20.095852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.258 [2024-07-16 01:18:20.095911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.258 qpair failed and we were unable to recover it. 00:25:04.258 [2024-07-16 01:18:20.096193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.258 [2024-07-16 01:18:20.096234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.258 qpair failed and we were unable to recover it. 00:25:04.258 [2024-07-16 01:18:20.096441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.258 [2024-07-16 01:18:20.096483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.258 qpair failed and we were unable to recover it. 00:25:04.258 [2024-07-16 01:18:20.096732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.258 [2024-07-16 01:18:20.096809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.258 qpair failed and we were unable to recover it. 00:25:04.258 [2024-07-16 01:18:20.097127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.258 [2024-07-16 01:18:20.097205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.258 qpair failed and we were unable to recover it. 00:25:04.258 [2024-07-16 01:18:20.097529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.258 [2024-07-16 01:18:20.097615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.258 qpair failed and we were unable to recover it. 00:25:04.258 [2024-07-16 01:18:20.097860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.258 [2024-07-16 01:18:20.097919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.258 qpair failed and we were unable to recover it. 00:25:04.258 [2024-07-16 01:18:20.098244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.258 [2024-07-16 01:18:20.098321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.258 qpair failed and we were unable to recover it. 
00:25:04.258 [2024-07-16 01:18:20.098605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.258 [2024-07-16 01:18:20.098682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.258 qpair failed and we were unable to recover it. 00:25:04.258 [2024-07-16 01:18:20.098984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.258 [2024-07-16 01:18:20.099044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.258 qpair failed and we were unable to recover it. 00:25:04.258 [2024-07-16 01:18:20.099329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.258 [2024-07-16 01:18:20.099405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.258 qpair failed and we were unable to recover it. 00:25:04.258 [2024-07-16 01:18:20.099708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.258 [2024-07-16 01:18:20.099786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.258 qpair failed and we were unable to recover it. 00:25:04.258 [2024-07-16 01:18:20.100041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.258 [2024-07-16 01:18:20.100102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.258 qpair failed and we were unable to recover it. 00:25:04.258 [2024-07-16 01:18:20.100388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.258 [2024-07-16 01:18:20.100464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.258 qpair failed and we were unable to recover it. 00:25:04.258 [2024-07-16 01:18:20.100782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.258 [2024-07-16 01:18:20.100859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.258 qpair failed and we were unable to recover it. 00:25:04.258 [2024-07-16 01:18:20.101166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.258 [2024-07-16 01:18:20.101208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.258 qpair failed and we were unable to recover it. 00:25:04.258 [2024-07-16 01:18:20.101398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.258 [2024-07-16 01:18:20.101439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.258 qpair failed and we were unable to recover it. 00:25:04.258 [2024-07-16 01:18:20.101624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.258 [2024-07-16 01:18:20.101665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.258 qpair failed and we were unable to recover it. 
00:25:04.258 [2024-07-16 01:18:20.101851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.258 [2024-07-16 01:18:20.101892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.258 qpair failed and we were unable to recover it. 00:25:04.258 [2024-07-16 01:18:20.102229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.258 [2024-07-16 01:18:20.102318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.258 qpair failed and we were unable to recover it. 00:25:04.258 [2024-07-16 01:18:20.102607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.258 [2024-07-16 01:18:20.102648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.258 qpair failed and we were unable to recover it. 00:25:04.258 [2024-07-16 01:18:20.102877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.259 [2024-07-16 01:18:20.102936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.259 qpair failed and we were unable to recover it. 00:25:04.259 [2024-07-16 01:18:20.103276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.259 [2024-07-16 01:18:20.103353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.259 qpair failed and we were unable to recover it. 00:25:04.259 [2024-07-16 01:18:20.103628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.259 [2024-07-16 01:18:20.103669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.259 qpair failed and we were unable to recover it. 00:25:04.259 [2024-07-16 01:18:20.103852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.259 [2024-07-16 01:18:20.103900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.259 qpair failed and we were unable to recover it. 00:25:04.259 [2024-07-16 01:18:20.104105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.259 [2024-07-16 01:18:20.104147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.259 qpair failed and we were unable to recover it. 00:25:04.259 [2024-07-16 01:18:20.104325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.259 [2024-07-16 01:18:20.104366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.259 qpair failed and we were unable to recover it. 00:25:04.259 [2024-07-16 01:18:20.104607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.259 [2024-07-16 01:18:20.104684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.259 qpair failed and we were unable to recover it. 
00:25:04.259 [2024-07-16 01:18:20.104984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.259 [2024-07-16 01:18:20.105045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.259 qpair failed and we were unable to recover it. 00:25:04.259 [2024-07-16 01:18:20.105299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.259 [2024-07-16 01:18:20.105377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.259 qpair failed and we were unable to recover it. 00:25:04.259 [2024-07-16 01:18:20.105619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.259 [2024-07-16 01:18:20.105697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.259 qpair failed and we were unable to recover it. 00:25:04.259 [2024-07-16 01:18:20.105951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.259 [2024-07-16 01:18:20.106045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.259 qpair failed and we were unable to recover it. 00:25:04.259 [2024-07-16 01:18:20.106364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.259 [2024-07-16 01:18:20.106450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.259 qpair failed and we were unable to recover it. 00:25:04.259 [2024-07-16 01:18:20.106747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.259 [2024-07-16 01:18:20.106808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.259 qpair failed and we were unable to recover it. 00:25:04.259 [2024-07-16 01:18:20.107073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.259 [2024-07-16 01:18:20.107115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.259 qpair failed and we were unable to recover it. 00:25:04.259 [2024-07-16 01:18:20.107308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.259 [2024-07-16 01:18:20.107350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.259 qpair failed and we were unable to recover it. 00:25:04.259 [2024-07-16 01:18:20.107494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.259 [2024-07-16 01:18:20.107536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.259 qpair failed and we were unable to recover it. 00:25:04.259 [2024-07-16 01:18:20.107794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.259 [2024-07-16 01:18:20.107871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.259 qpair failed and we were unable to recover it. 
00:25:04.259 [2024-07-16 01:18:20.108213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.259 [2024-07-16 01:18:20.108291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.259 qpair failed and we were unable to recover it. 00:25:04.259 [2024-07-16 01:18:20.108603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.259 [2024-07-16 01:18:20.108680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.259 qpair failed and we were unable to recover it. 00:25:04.259 [2024-07-16 01:18:20.108939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.259 [2024-07-16 01:18:20.108989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.259 qpair failed and we were unable to recover it. 00:25:04.259 [2024-07-16 01:18:20.109157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.259 [2024-07-16 01:18:20.109198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.259 qpair failed and we were unable to recover it. 00:25:04.259 [2024-07-16 01:18:20.109538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.259 [2024-07-16 01:18:20.109615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.259 qpair failed and we were unable to recover it. 00:25:04.259 [2024-07-16 01:18:20.109902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.259 [2024-07-16 01:18:20.109986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.259 qpair failed and we were unable to recover it. 00:25:04.259 [2024-07-16 01:18:20.110284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.259 [2024-07-16 01:18:20.110344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.259 qpair failed and we were unable to recover it. 00:25:04.259 [2024-07-16 01:18:20.110655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.259 [2024-07-16 01:18:20.110733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.259 qpair failed and we were unable to recover it. 00:25:04.259 [2024-07-16 01:18:20.111042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.259 [2024-07-16 01:18:20.111104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.259 qpair failed and we were unable to recover it. 00:25:04.259 [2024-07-16 01:18:20.111370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.259 [2024-07-16 01:18:20.111432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.259 qpair failed and we were unable to recover it. 
00:25:04.259 [2024-07-16 01:18:20.111706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.259 [2024-07-16 01:18:20.111747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.259 qpair failed and we were unable to recover it. 00:25:04.259 [2024-07-16 01:18:20.111997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.259 [2024-07-16 01:18:20.112080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.259 qpair failed and we were unable to recover it. 00:25:04.259 [2024-07-16 01:18:20.112404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.259 [2024-07-16 01:18:20.112482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.259 qpair failed and we were unable to recover it. 00:25:04.259 [2024-07-16 01:18:20.112767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.259 [2024-07-16 01:18:20.112808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.259 qpair failed and we were unable to recover it. 00:25:04.259 [2024-07-16 01:18:20.112995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.259 [2024-07-16 01:18:20.113038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.259 qpair failed and we were unable to recover it. 00:25:04.259 [2024-07-16 01:18:20.113266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.259 [2024-07-16 01:18:20.113343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.259 qpair failed and we were unable to recover it. 00:25:04.259 [2024-07-16 01:18:20.113666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.259 [2024-07-16 01:18:20.113745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.259 qpair failed and we were unable to recover it. 00:25:04.259 [2024-07-16 01:18:20.114075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.259 [2024-07-16 01:18:20.114136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.259 qpair failed and we were unable to recover it. 00:25:04.259 [2024-07-16 01:18:20.114434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.259 [2024-07-16 01:18:20.114511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.259 qpair failed and we were unable to recover it. 00:25:04.259 [2024-07-16 01:18:20.114791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.259 [2024-07-16 01:18:20.114870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.259 qpair failed and we were unable to recover it. 
00:25:04.259 [2024-07-16 01:18:20.115191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.259 [2024-07-16 01:18:20.115270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.259 qpair failed and we were unable to recover it. 00:25:04.259 [2024-07-16 01:18:20.115587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.259 [2024-07-16 01:18:20.115673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.259 qpair failed and we were unable to recover it. 00:25:04.259 [2024-07-16 01:18:20.115982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.259 [2024-07-16 01:18:20.116042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.259 qpair failed and we were unable to recover it. 00:25:04.259 [2024-07-16 01:18:20.116331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.259 [2024-07-16 01:18:20.116409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.259 qpair failed and we were unable to recover it. 00:25:04.260 [2024-07-16 01:18:20.116688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.260 [2024-07-16 01:18:20.116768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.260 qpair failed and we were unable to recover it. 00:25:04.260 [2024-07-16 01:18:20.117058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.260 [2024-07-16 01:18:20.117119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.260 qpair failed and we were unable to recover it. 00:25:04.260 [2024-07-16 01:18:20.117445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.260 [2024-07-16 01:18:20.117522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.260 qpair failed and we were unable to recover it. 00:25:04.260 [2024-07-16 01:18:20.117842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.260 [2024-07-16 01:18:20.117918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.260 qpair failed and we were unable to recover it. 00:25:04.260 [2024-07-16 01:18:20.118189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.260 [2024-07-16 01:18:20.118249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.260 qpair failed and we were unable to recover it. 00:25:04.260 [2024-07-16 01:18:20.118567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.260 [2024-07-16 01:18:20.118644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.260 qpair failed and we were unable to recover it. 
00:25:04.260 [2024-07-16 01:18:20.118865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.260 [2024-07-16 01:18:20.118907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.260 qpair failed and we were unable to recover it. 00:25:04.260 [2024-07-16 01:18:20.119082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.260 [2024-07-16 01:18:20.119123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.260 qpair failed and we were unable to recover it. 00:25:04.260 [2024-07-16 01:18:20.119450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.260 [2024-07-16 01:18:20.119526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.260 qpair failed and we were unable to recover it. 00:25:04.260 [2024-07-16 01:18:20.119826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.260 [2024-07-16 01:18:20.119887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.260 qpair failed and we were unable to recover it. 00:25:04.260 [2024-07-16 01:18:20.120175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.260 [2024-07-16 01:18:20.120255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.260 qpair failed and we were unable to recover it. 00:25:04.260 [2024-07-16 01:18:20.120548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.260 [2024-07-16 01:18:20.120625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.260 qpair failed and we were unable to recover it. 00:25:04.260 [2024-07-16 01:18:20.120861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.260 [2024-07-16 01:18:20.120920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.260 qpair failed and we were unable to recover it. 00:25:04.260 [2024-07-16 01:18:20.121195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.260 [2024-07-16 01:18:20.121273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.260 qpair failed and we were unable to recover it. 00:25:04.260 [2024-07-16 01:18:20.121563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.260 [2024-07-16 01:18:20.121641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.260 qpair failed and we were unable to recover it. 00:25:04.260 [2024-07-16 01:18:20.121919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.260 [2024-07-16 01:18:20.122005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.260 qpair failed and we were unable to recover it. 
00:25:04.260 [2024-07-16 01:18:20.122302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.260 [2024-07-16 01:18:20.122378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.260 qpair failed and we were unable to recover it. 00:25:04.260 [2024-07-16 01:18:20.122697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.260 [2024-07-16 01:18:20.122773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.260 qpair failed and we were unable to recover it. 00:25:04.260 [2024-07-16 01:18:20.123038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.260 [2024-07-16 01:18:20.123100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.260 qpair failed and we were unable to recover it. 00:25:04.260 [2024-07-16 01:18:20.123400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.260 [2024-07-16 01:18:20.123477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.260 qpair failed and we were unable to recover it. 00:25:04.260 [2024-07-16 01:18:20.123756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.260 [2024-07-16 01:18:20.123833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.260 qpair failed and we were unable to recover it. 00:25:04.260 [2024-07-16 01:18:20.124119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.260 [2024-07-16 01:18:20.124181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.260 qpair failed and we were unable to recover it. 00:25:04.260 [2024-07-16 01:18:20.124436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.260 [2024-07-16 01:18:20.124513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.260 qpair failed and we were unable to recover it. 00:25:04.260 [2024-07-16 01:18:20.124783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.260 [2024-07-16 01:18:20.124861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.260 qpair failed and we were unable to recover it. 00:25:04.260 [2024-07-16 01:18:20.125114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.260 [2024-07-16 01:18:20.125192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.260 qpair failed and we were unable to recover it. 00:25:04.260 [2024-07-16 01:18:20.125522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.260 [2024-07-16 01:18:20.125603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.260 qpair failed and we were unable to recover it. 
00:25:04.260 [2024-07-16 01:18:20.125854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.260 [2024-07-16 01:18:20.125915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.260 qpair failed and we were unable to recover it. 00:25:04.260 [2024-07-16 01:18:20.126181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.260 [2024-07-16 01:18:20.126259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.260 qpair failed and we were unable to recover it. 00:25:04.260 [2024-07-16 01:18:20.126553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.260 [2024-07-16 01:18:20.126616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.260 qpair failed and we were unable to recover it. 00:25:04.260 [2024-07-16 01:18:20.126912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.260 [2024-07-16 01:18:20.126989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.260 qpair failed and we were unable to recover it. 00:25:04.260 [2024-07-16 01:18:20.127307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.260 [2024-07-16 01:18:20.127383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.260 qpair failed and we were unable to recover it. 00:25:04.260 [2024-07-16 01:18:20.127702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.260 [2024-07-16 01:18:20.127780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.260 qpair failed and we were unable to recover it. 00:25:04.260 [2024-07-16 01:18:20.128070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.260 [2024-07-16 01:18:20.128132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.260 qpair failed and we were unable to recover it. 00:25:04.260 [2024-07-16 01:18:20.128457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.260 [2024-07-16 01:18:20.128534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.260 qpair failed and we were unable to recover it. 00:25:04.260 [2024-07-16 01:18:20.128814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.260 [2024-07-16 01:18:20.128891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.260 qpair failed and we were unable to recover it. 00:25:04.260 [2024-07-16 01:18:20.129150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.260 [2024-07-16 01:18:20.129227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.260 qpair failed and we were unable to recover it. 
00:25:04.260 [2024-07-16 01:18:20.129491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.260 [2024-07-16 01:18:20.129569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.260 qpair failed and we were unable to recover it. 00:25:04.260 [2024-07-16 01:18:20.129837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.260 [2024-07-16 01:18:20.129897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.260 qpair failed and we were unable to recover it. 00:25:04.260 [2024-07-16 01:18:20.130210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.260 [2024-07-16 01:18:20.130289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.260 qpair failed and we were unable to recover it. 00:25:04.260 [2024-07-16 01:18:20.130622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.260 [2024-07-16 01:18:20.130699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.260 qpair failed and we were unable to recover it. 00:25:04.260 [2024-07-16 01:18:20.130976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.260 [2024-07-16 01:18:20.131037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.260 qpair failed and we were unable to recover it. 00:25:04.261 [2024-07-16 01:18:20.131291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.261 [2024-07-16 01:18:20.131368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.261 qpair failed and we were unable to recover it. 00:25:04.261 [2024-07-16 01:18:20.131695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.261 [2024-07-16 01:18:20.131772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.261 qpair failed and we were unable to recover it. 00:25:04.261 [2024-07-16 01:18:20.132032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.261 [2024-07-16 01:18:20.132093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.261 qpair failed and we were unable to recover it. 00:25:04.261 [2024-07-16 01:18:20.132413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.261 [2024-07-16 01:18:20.132491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.261 qpair failed and we were unable to recover it. 00:25:04.261 [2024-07-16 01:18:20.132768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.261 [2024-07-16 01:18:20.132847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.261 qpair failed and we were unable to recover it. 
00:25:04.261 [2024-07-16 01:18:20.133124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.261 [2024-07-16 01:18:20.133185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.261 qpair failed and we were unable to recover it. 00:25:04.261 [2024-07-16 01:18:20.133511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.261 [2024-07-16 01:18:20.133588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.261 qpair failed and we were unable to recover it. 00:25:04.261 [2024-07-16 01:18:20.133854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.261 [2024-07-16 01:18:20.133914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.261 qpair failed and we were unable to recover it. 00:25:04.261 [2024-07-16 01:18:20.134255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.261 [2024-07-16 01:18:20.134334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.261 qpair failed and we were unable to recover it. 00:25:04.261 [2024-07-16 01:18:20.134610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.261 [2024-07-16 01:18:20.134688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.261 qpair failed and we were unable to recover it. 00:25:04.261 [2024-07-16 01:18:20.134939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.261 [2024-07-16 01:18:20.135008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.261 qpair failed and we were unable to recover it. 00:25:04.261 [2024-07-16 01:18:20.135340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.261 [2024-07-16 01:18:20.135418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.261 qpair failed and we were unable to recover it. 00:25:04.261 [2024-07-16 01:18:20.135705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.261 [2024-07-16 01:18:20.135766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.261 qpair failed and we were unable to recover it. 00:25:04.261 [2024-07-16 01:18:20.136063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.261 [2024-07-16 01:18:20.136124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.261 qpair failed and we were unable to recover it. 00:25:04.261 [2024-07-16 01:18:20.136440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.261 [2024-07-16 01:18:20.136518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.261 qpair failed and we were unable to recover it. 
00:25:04.261 [2024-07-16 01:18:20.136798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.261 [2024-07-16 01:18:20.136876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.261 qpair failed and we were unable to recover it. 00:25:04.261 [2024-07-16 01:18:20.137211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.261 [2024-07-16 01:18:20.137288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.261 qpair failed and we were unable to recover it. 00:25:04.261 [2024-07-16 01:18:20.137556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.261 [2024-07-16 01:18:20.137633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.261 qpair failed and we were unable to recover it. 00:25:04.261 [2024-07-16 01:18:20.137923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.261 [2024-07-16 01:18:20.138008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.261 qpair failed and we were unable to recover it. 00:25:04.261 [2024-07-16 01:18:20.138345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.261 [2024-07-16 01:18:20.138421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.261 qpair failed and we were unable to recover it. 00:25:04.261 [2024-07-16 01:18:20.138738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.261 [2024-07-16 01:18:20.138815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.261 qpair failed and we were unable to recover it. 00:25:04.261 [2024-07-16 01:18:20.139093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.261 [2024-07-16 01:18:20.139154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.261 qpair failed and we were unable to recover it. 00:25:04.261 [2024-07-16 01:18:20.139438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.261 [2024-07-16 01:18:20.139514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.261 qpair failed and we were unable to recover it. 00:25:04.261 [2024-07-16 01:18:20.139830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.261 [2024-07-16 01:18:20.139907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.261 qpair failed and we were unable to recover it. 00:25:04.261 [2024-07-16 01:18:20.140218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.261 [2024-07-16 01:18:20.140305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.261 qpair failed and we were unable to recover it. 
00:25:04.261 [2024-07-16 01:18:20.140585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.261 [2024-07-16 01:18:20.140662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.261 qpair failed and we were unable to recover it. 00:25:04.261 [2024-07-16 01:18:20.140949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.261 [2024-07-16 01:18:20.141024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.261 qpair failed and we were unable to recover it. 00:25:04.261 [2024-07-16 01:18:20.141298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.261 [2024-07-16 01:18:20.141357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.261 qpair failed and we were unable to recover it. 00:25:04.261 [2024-07-16 01:18:20.141641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.261 [2024-07-16 01:18:20.141718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.261 qpair failed and we were unable to recover it. 00:25:04.261 [2024-07-16 01:18:20.142005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.261 [2024-07-16 01:18:20.142066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.261 qpair failed and we were unable to recover it. 00:25:04.261 [2024-07-16 01:18:20.142358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.261 [2024-07-16 01:18:20.142419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.261 qpair failed and we were unable to recover it. 00:25:04.261 [2024-07-16 01:18:20.142712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.261 [2024-07-16 01:18:20.142790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.261 qpair failed and we were unable to recover it. 00:25:04.261 [2024-07-16 01:18:20.143013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.261 [2024-07-16 01:18:20.143073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.261 qpair failed and we were unable to recover it. 00:25:04.261 [2024-07-16 01:18:20.143327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.261 [2024-07-16 01:18:20.143405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.261 qpair failed and we were unable to recover it. 00:25:04.261 [2024-07-16 01:18:20.143692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.261 [2024-07-16 01:18:20.143769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.261 qpair failed and we were unable to recover it. 
00:25:04.261 [2024-07-16 01:18:20.144059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.262 [2024-07-16 01:18:20.144119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.262 qpair failed and we were unable to recover it. 00:25:04.262 [2024-07-16 01:18:20.144444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.262 [2024-07-16 01:18:20.144530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.262 qpair failed and we were unable to recover it. 00:25:04.262 [2024-07-16 01:18:20.144808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.262 [2024-07-16 01:18:20.144886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.262 qpair failed and we were unable to recover it. 00:25:04.262 [2024-07-16 01:18:20.145191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.262 [2024-07-16 01:18:20.145268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.262 qpair failed and we were unable to recover it. 00:25:04.262 [2024-07-16 01:18:20.145541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.262 [2024-07-16 01:18:20.145617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.262 qpair failed and we were unable to recover it. 00:25:04.262 [2024-07-16 01:18:20.145907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.262 [2024-07-16 01:18:20.145983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.262 qpair failed and we were unable to recover it. 00:25:04.262 [2024-07-16 01:18:20.146311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.262 [2024-07-16 01:18:20.146391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.262 qpair failed and we were unable to recover it. 00:25:04.262 [2024-07-16 01:18:20.146716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.262 [2024-07-16 01:18:20.146793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.262 qpair failed and we were unable to recover it. 00:25:04.262 [2024-07-16 01:18:20.147089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.262 [2024-07-16 01:18:20.147150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.262 qpair failed and we were unable to recover it. 00:25:04.262 [2024-07-16 01:18:20.147432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.262 [2024-07-16 01:18:20.147507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.262 qpair failed and we were unable to recover it. 
00:25:04.262 [2024-07-16 01:18:20.147833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.262 [2024-07-16 01:18:20.147910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.262 qpair failed and we were unable to recover it. 00:25:04.262 [2024-07-16 01:18:20.148257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.262 [2024-07-16 01:18:20.148341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.262 qpair failed and we were unable to recover it. 00:25:04.262 [2024-07-16 01:18:20.148659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.262 [2024-07-16 01:18:20.148736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.262 qpair failed and we were unable to recover it. 00:25:04.262 [2024-07-16 01:18:20.148995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.262 [2024-07-16 01:18:20.149058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.262 qpair failed and we were unable to recover it. 00:25:04.262 [2024-07-16 01:18:20.149332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.262 [2024-07-16 01:18:20.149409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.262 qpair failed and we were unable to recover it. 00:25:04.262 [2024-07-16 01:18:20.149726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.262 [2024-07-16 01:18:20.149802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.262 qpair failed and we were unable to recover it. 00:25:04.262 [2024-07-16 01:18:20.150081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.262 [2024-07-16 01:18:20.150151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.262 qpair failed and we were unable to recover it. 00:25:04.262 [2024-07-16 01:18:20.150483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.262 [2024-07-16 01:18:20.150563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.262 qpair failed and we were unable to recover it. 00:25:04.262 [2024-07-16 01:18:20.150852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.262 [2024-07-16 01:18:20.150928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.262 qpair failed and we were unable to recover it. 00:25:04.262 [2024-07-16 01:18:20.151239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.262 [2024-07-16 01:18:20.151317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.262 qpair failed and we were unable to recover it. 
00:25:04.262 [2024-07-16 01:18:20.151613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.262 [2024-07-16 01:18:20.151677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.262 qpair failed and we were unable to recover it. 00:25:04.262 [2024-07-16 01:18:20.151987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.262 [2024-07-16 01:18:20.152048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.262 qpair failed and we were unable to recover it. 00:25:04.262 [2024-07-16 01:18:20.152313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.262 [2024-07-16 01:18:20.152373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.262 qpair failed and we were unable to recover it. 00:25:04.262 [2024-07-16 01:18:20.152658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.262 [2024-07-16 01:18:20.152734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.262 qpair failed and we were unable to recover it. 00:25:04.262 [2024-07-16 01:18:20.153032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.262 [2024-07-16 01:18:20.153093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.262 qpair failed and we were unable to recover it. 00:25:04.262 [2024-07-16 01:18:20.153381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.262 [2024-07-16 01:18:20.153457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.262 qpair failed and we were unable to recover it. 00:25:04.262 [2024-07-16 01:18:20.153775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.262 [2024-07-16 01:18:20.153851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.262 qpair failed and we were unable to recover it. 00:25:04.262 [2024-07-16 01:18:20.154140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.262 [2024-07-16 01:18:20.154200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.262 qpair failed and we were unable to recover it. 00:25:04.262 [2024-07-16 01:18:20.154492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.262 [2024-07-16 01:18:20.154569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.262 qpair failed and we were unable to recover it. 00:25:04.262 [2024-07-16 01:18:20.154889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.262 [2024-07-16 01:18:20.154976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.262 qpair failed and we were unable to recover it. 
00:25:04.262 [2024-07-16 01:18:20.155245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.262 [2024-07-16 01:18:20.155307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.262 qpair failed and we were unable to recover it. 00:25:04.262 [2024-07-16 01:18:20.155598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.262 [2024-07-16 01:18:20.155676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.262 qpair failed and we were unable to recover it. 00:25:04.262 [2024-07-16 01:18:20.155935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.262 [2024-07-16 01:18:20.156012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.262 qpair failed and we were unable to recover it. 00:25:04.262 [2024-07-16 01:18:20.156302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.262 [2024-07-16 01:18:20.156361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.262 qpair failed and we were unable to recover it. 00:25:04.262 [2024-07-16 01:18:20.156680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.262 [2024-07-16 01:18:20.156757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.262 qpair failed and we were unable to recover it. 00:25:04.262 [2024-07-16 01:18:20.157065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.262 [2024-07-16 01:18:20.157126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.262 qpair failed and we were unable to recover it. 00:25:04.262 [2024-07-16 01:18:20.157391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.262 [2024-07-16 01:18:20.157468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.262 qpair failed and we were unable to recover it. 00:25:04.262 [2024-07-16 01:18:20.157749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.262 [2024-07-16 01:18:20.157828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.262 qpair failed and we were unable to recover it. 00:25:04.262 [2024-07-16 01:18:20.158153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.262 [2024-07-16 01:18:20.158231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.262 qpair failed and we were unable to recover it. 00:25:04.262 [2024-07-16 01:18:20.158505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.262 [2024-07-16 01:18:20.158581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.262 qpair failed and we were unable to recover it. 
00:25:04.542 [2024-07-16 01:18:20.228286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.542 [2024-07-16 01:18:20.228367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.542 qpair failed and we were unable to recover it. 00:25:04.542 [2024-07-16 01:18:20.228662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.542 [2024-07-16 01:18:20.228742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.542 qpair failed and we were unable to recover it. 00:25:04.542 [2024-07-16 01:18:20.229051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.542 [2024-07-16 01:18:20.229132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.542 qpair failed and we were unable to recover it. 00:25:04.542 [2024-07-16 01:18:20.229389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.542 [2024-07-16 01:18:20.229485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.542 qpair failed and we were unable to recover it. 00:25:04.542 [2024-07-16 01:18:20.229781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.542 [2024-07-16 01:18:20.229861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.542 qpair failed and we were unable to recover it. 00:25:04.542 [2024-07-16 01:18:20.230195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.542 [2024-07-16 01:18:20.230282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.542 qpair failed and we were unable to recover it. 00:25:04.542 [2024-07-16 01:18:20.230607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.542 [2024-07-16 01:18:20.230674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.542 qpair failed and we were unable to recover it. 00:25:04.542 [2024-07-16 01:18:20.230976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.542 [2024-07-16 01:18:20.231043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.542 qpair failed and we were unable to recover it. 00:25:04.542 [2024-07-16 01:18:20.231393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.542 [2024-07-16 01:18:20.231472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.542 qpair failed and we were unable to recover it. 00:25:04.542 [2024-07-16 01:18:20.231797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.542 [2024-07-16 01:18:20.231898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.542 qpair failed and we were unable to recover it. 
00:25:04.542 [2024-07-16 01:18:20.232238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.542 [2024-07-16 01:18:20.232305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.542 qpair failed and we were unable to recover it. 00:25:04.542 [2024-07-16 01:18:20.232633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.542 [2024-07-16 01:18:20.232712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.542 qpair failed and we were unable to recover it. 00:25:04.542 [2024-07-16 01:18:20.232986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.542 [2024-07-16 01:18:20.233062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.542 qpair failed and we were unable to recover it. 00:25:04.542 [2024-07-16 01:18:20.233333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.542 [2024-07-16 01:18:20.233402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.542 qpair failed and we were unable to recover it. 00:25:04.542 [2024-07-16 01:18:20.233732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.542 [2024-07-16 01:18:20.233810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.542 qpair failed and we were unable to recover it. 00:25:04.542 [2024-07-16 01:18:20.234054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.542 [2024-07-16 01:18:20.234119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.542 qpair failed and we were unable to recover it. 00:25:04.542 [2024-07-16 01:18:20.234477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.542 [2024-07-16 01:18:20.234558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.542 qpair failed and we were unable to recover it. 00:25:04.542 [2024-07-16 01:18:20.234898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.542 [2024-07-16 01:18:20.234991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.542 qpair failed and we were unable to recover it. 00:25:04.542 [2024-07-16 01:18:20.235265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.542 [2024-07-16 01:18:20.235333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.542 qpair failed and we were unable to recover it. 00:25:04.542 [2024-07-16 01:18:20.235664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.542 [2024-07-16 01:18:20.235741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.542 qpair failed and we were unable to recover it. 
00:25:04.542 [2024-07-16 01:18:20.236085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.542 [2024-07-16 01:18:20.236148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.542 qpair failed and we were unable to recover it. 00:25:04.542 [2024-07-16 01:18:20.236427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.542 [2024-07-16 01:18:20.236505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.542 qpair failed and we were unable to recover it. 00:25:04.542 [2024-07-16 01:18:20.236841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.542 [2024-07-16 01:18:20.236918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.542 qpair failed and we were unable to recover it. 00:25:04.542 [2024-07-16 01:18:20.237278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.542 [2024-07-16 01:18:20.237358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.542 qpair failed and we were unable to recover it. 00:25:04.542 [2024-07-16 01:18:20.237612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.542 [2024-07-16 01:18:20.237706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.542 qpair failed and we were unable to recover it. 00:25:04.542 [2024-07-16 01:18:20.238003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.542 [2024-07-16 01:18:20.238064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.542 qpair failed and we were unable to recover it. 00:25:04.542 [2024-07-16 01:18:20.238395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.542 [2024-07-16 01:18:20.238482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.542 qpair failed and we were unable to recover it. 00:25:04.542 [2024-07-16 01:18:20.238774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.542 [2024-07-16 01:18:20.238861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.542 qpair failed and we were unable to recover it. 00:25:04.542 [2024-07-16 01:18:20.239158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.542 [2024-07-16 01:18:20.239218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.542 qpair failed and we were unable to recover it. 00:25:04.542 [2024-07-16 01:18:20.239496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.542 [2024-07-16 01:18:20.239575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.542 qpair failed and we were unable to recover it. 
00:25:04.542 [2024-07-16 01:18:20.239876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.542 [2024-07-16 01:18:20.239936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.542 qpair failed and we were unable to recover it. 00:25:04.542 [2024-07-16 01:18:20.240249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.542 [2024-07-16 01:18:20.240315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.542 qpair failed and we were unable to recover it. 00:25:04.542 [2024-07-16 01:18:20.240604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.542 [2024-07-16 01:18:20.240692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.542 qpair failed and we were unable to recover it. 00:25:04.542 [2024-07-16 01:18:20.240995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.542 [2024-07-16 01:18:20.241057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.542 qpair failed and we were unable to recover it. 00:25:04.542 [2024-07-16 01:18:20.241356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.542 [2024-07-16 01:18:20.241435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.542 qpair failed and we were unable to recover it. 00:25:04.542 [2024-07-16 01:18:20.241780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.542 [2024-07-16 01:18:20.241862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.542 qpair failed and we were unable to recover it. 00:25:04.542 [2024-07-16 01:18:20.242146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.542 [2024-07-16 01:18:20.242209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.542 qpair failed and we were unable to recover it. 00:25:04.542 [2024-07-16 01:18:20.242553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.542 [2024-07-16 01:18:20.242632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.542 qpair failed and we were unable to recover it. 00:25:04.542 [2024-07-16 01:18:20.242894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.542 [2024-07-16 01:18:20.242981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.542 qpair failed and we were unable to recover it. 00:25:04.542 [2024-07-16 01:18:20.243336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.543 [2024-07-16 01:18:20.243413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.543 qpair failed and we were unable to recover it. 
00:25:04.543 [2024-07-16 01:18:20.243760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.543 [2024-07-16 01:18:20.243839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.543 qpair failed and we were unable to recover it. 00:25:04.543 [2024-07-16 01:18:20.244134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.543 [2024-07-16 01:18:20.244196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.543 qpair failed and we were unable to recover it. 00:25:04.543 [2024-07-16 01:18:20.244538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.543 [2024-07-16 01:18:20.244616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.543 qpair failed and we were unable to recover it. 00:25:04.543 [2024-07-16 01:18:20.244854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.543 [2024-07-16 01:18:20.244928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.543 qpair failed and we were unable to recover it. 00:25:04.543 [2024-07-16 01:18:20.245258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.543 [2024-07-16 01:18:20.245336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.543 qpair failed and we were unable to recover it. 00:25:04.543 [2024-07-16 01:18:20.245669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.543 [2024-07-16 01:18:20.245749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.543 qpair failed and we were unable to recover it. 00:25:04.543 [2024-07-16 01:18:20.246064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.543 [2024-07-16 01:18:20.246129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.543 qpair failed and we were unable to recover it. 00:25:04.543 [2024-07-16 01:18:20.246406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.543 [2024-07-16 01:18:20.246485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.543 qpair failed and we were unable to recover it. 00:25:04.543 [2024-07-16 01:18:20.246747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.543 [2024-07-16 01:18:20.246826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.543 qpair failed and we were unable to recover it. 00:25:04.543 [2024-07-16 01:18:20.247086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.543 [2024-07-16 01:18:20.247183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.543 qpair failed and we were unable to recover it. 
00:25:04.543 [2024-07-16 01:18:20.247485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.543 [2024-07-16 01:18:20.247562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.543 qpair failed and we were unable to recover it. 00:25:04.543 [2024-07-16 01:18:20.247818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.543 [2024-07-16 01:18:20.247881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.543 qpair failed and we were unable to recover it. 00:25:04.543 [2024-07-16 01:18:20.248226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.543 [2024-07-16 01:18:20.248331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.543 qpair failed and we were unable to recover it. 00:25:04.543 [2024-07-16 01:18:20.248666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.543 [2024-07-16 01:18:20.248744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.543 qpair failed and we were unable to recover it. 00:25:04.543 [2024-07-16 01:18:20.249086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.543 [2024-07-16 01:18:20.249165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.543 qpair failed and we were unable to recover it. 00:25:04.543 [2024-07-16 01:18:20.249490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.543 [2024-07-16 01:18:20.249569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.543 qpair failed and we were unable to recover it. 00:25:04.543 [2024-07-16 01:18:20.249857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.543 [2024-07-16 01:18:20.249917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.543 qpair failed and we were unable to recover it. 00:25:04.543 [2024-07-16 01:18:20.250226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.543 [2024-07-16 01:18:20.250317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.543 qpair failed and we were unable to recover it. 00:25:04.543 [2024-07-16 01:18:20.250657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.543 [2024-07-16 01:18:20.250736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.543 qpair failed and we were unable to recover it. 00:25:04.543 [2024-07-16 01:18:20.250986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.543 [2024-07-16 01:18:20.251046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.543 qpair failed and we were unable to recover it. 
00:25:04.543 [2024-07-16 01:18:20.251372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.543 [2024-07-16 01:18:20.251452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.543 qpair failed and we were unable to recover it. 00:25:04.543 [2024-07-16 01:18:20.251770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.543 [2024-07-16 01:18:20.251846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.543 qpair failed and we were unable to recover it. 00:25:04.543 [2024-07-16 01:18:20.252113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.543 [2024-07-16 01:18:20.252176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.543 qpair failed and we were unable to recover it. 00:25:04.543 [2024-07-16 01:18:20.252501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.543 [2024-07-16 01:18:20.252588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.543 qpair failed and we were unable to recover it. 00:25:04.543 [2024-07-16 01:18:20.252892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.543 [2024-07-16 01:18:20.252982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.543 qpair failed and we were unable to recover it. 00:25:04.543 [2024-07-16 01:18:20.253305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.543 [2024-07-16 01:18:20.253381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.543 qpair failed and we were unable to recover it. 00:25:04.543 [2024-07-16 01:18:20.253666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.543 [2024-07-16 01:18:20.253743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.543 qpair failed and we were unable to recover it. 00:25:04.543 [2024-07-16 01:18:20.254036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.543 [2024-07-16 01:18:20.254098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.543 qpair failed and we were unable to recover it. 00:25:04.543 [2024-07-16 01:18:20.254381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.543 [2024-07-16 01:18:20.254460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.543 qpair failed and we were unable to recover it. 00:25:04.543 [2024-07-16 01:18:20.254754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.543 [2024-07-16 01:18:20.254834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.543 qpair failed and we were unable to recover it. 
00:25:04.543 [2024-07-16 01:18:20.255080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.543 [2024-07-16 01:18:20.255141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.543 qpair failed and we were unable to recover it. 00:25:04.543 [2024-07-16 01:18:20.255385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.543 [2024-07-16 01:18:20.255480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.543 qpair failed and we were unable to recover it. 00:25:04.543 [2024-07-16 01:18:20.255803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.543 [2024-07-16 01:18:20.255879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.543 qpair failed and we were unable to recover it. 00:25:04.543 [2024-07-16 01:18:20.256169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.543 [2024-07-16 01:18:20.256249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.543 qpair failed and we were unable to recover it. 00:25:04.543 [2024-07-16 01:18:20.256501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.543 [2024-07-16 01:18:20.256577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.543 qpair failed and we were unable to recover it. 00:25:04.543 [2024-07-16 01:18:20.256850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.543 [2024-07-16 01:18:20.256909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.543 qpair failed and we were unable to recover it. 00:25:04.543 [2024-07-16 01:18:20.257235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.543 [2024-07-16 01:18:20.257329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.543 qpair failed and we were unable to recover it. 00:25:04.543 [2024-07-16 01:18:20.257692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.543 [2024-07-16 01:18:20.257775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.543 qpair failed and we were unable to recover it. 00:25:04.543 [2024-07-16 01:18:20.258060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.543 [2024-07-16 01:18:20.258121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.543 qpair failed and we were unable to recover it. 00:25:04.544 [2024-07-16 01:18:20.258407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.544 [2024-07-16 01:18:20.258483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.544 qpair failed and we were unable to recover it. 
00:25:04.544 [2024-07-16 01:18:20.258769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.544 [2024-07-16 01:18:20.258854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.544 qpair failed and we were unable to recover it. 00:25:04.544 [2024-07-16 01:18:20.259149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.544 [2024-07-16 01:18:20.259237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.544 qpair failed and we were unable to recover it. 00:25:04.544 [2024-07-16 01:18:20.259540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.544 [2024-07-16 01:18:20.259616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.544 qpair failed and we were unable to recover it. 00:25:04.544 [2024-07-16 01:18:20.259908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.544 [2024-07-16 01:18:20.259995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.544 qpair failed and we were unable to recover it. 00:25:04.544 [2024-07-16 01:18:20.260334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.544 [2024-07-16 01:18:20.260412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.544 qpair failed and we were unable to recover it. 00:25:04.544 [2024-07-16 01:18:20.260777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.544 [2024-07-16 01:18:20.260856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.544 qpair failed and we were unable to recover it. 00:25:04.544 [2024-07-16 01:18:20.261166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.544 [2024-07-16 01:18:20.261227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.544 qpair failed and we were unable to recover it. 00:25:04.544 [2024-07-16 01:18:20.261511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.544 [2024-07-16 01:18:20.261594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.544 qpair failed and we were unable to recover it. 00:25:04.544 [2024-07-16 01:18:20.261880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.544 [2024-07-16 01:18:20.261939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.544 qpair failed and we were unable to recover it. 00:25:04.544 [2024-07-16 01:18:20.262313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.544 [2024-07-16 01:18:20.262394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.544 qpair failed and we were unable to recover it. 
00:25:04.544 [2024-07-16 01:18:20.262726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.544 [2024-07-16 01:18:20.262812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.544 qpair failed and we were unable to recover it. 00:25:04.544 [2024-07-16 01:18:20.263052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.544 [2024-07-16 01:18:20.263116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.544 qpair failed and we were unable to recover it. 00:25:04.544 [2024-07-16 01:18:20.263424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.544 [2024-07-16 01:18:20.263500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.544 qpair failed and we were unable to recover it. 00:25:04.544 [2024-07-16 01:18:20.263837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.544 [2024-07-16 01:18:20.263914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.544 qpair failed and we were unable to recover it. 00:25:04.544 [2024-07-16 01:18:20.264226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.544 [2024-07-16 01:18:20.264286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.544 qpair failed and we were unable to recover it. 00:25:04.544 [2024-07-16 01:18:20.264545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.544 [2024-07-16 01:18:20.264606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.544 qpair failed and we were unable to recover it. 00:25:04.544 [2024-07-16 01:18:20.264816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.544 [2024-07-16 01:18:20.264875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.544 qpair failed and we were unable to recover it. 00:25:04.544 [2024-07-16 01:18:20.265180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.544 [2024-07-16 01:18:20.265258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.544 qpair failed and we were unable to recover it. 00:25:04.544 [2024-07-16 01:18:20.265559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.544 [2024-07-16 01:18:20.265640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.544 qpair failed and we were unable to recover it. 00:25:04.544 [2024-07-16 01:18:20.265945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.544 [2024-07-16 01:18:20.266017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.544 qpair failed and we were unable to recover it. 
00:25:04.544 [2024-07-16 01:18:20.266336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.544 [2024-07-16 01:18:20.266414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.544 qpair failed and we were unable to recover it. 00:25:04.544 [2024-07-16 01:18:20.266708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.544 [2024-07-16 01:18:20.266785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.544 qpair failed and we were unable to recover it. 00:25:04.544 [2024-07-16 01:18:20.267042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.544 [2024-07-16 01:18:20.267111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.544 qpair failed and we were unable to recover it. 00:25:04.544 [2024-07-16 01:18:20.267439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.544 [2024-07-16 01:18:20.267516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.544 qpair failed and we were unable to recover it. 00:25:04.544 [2024-07-16 01:18:20.267833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.544 [2024-07-16 01:18:20.267909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.544 qpair failed and we were unable to recover it. 00:25:04.544 [2024-07-16 01:18:20.268161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.544 [2024-07-16 01:18:20.268241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.544 qpair failed and we were unable to recover it. 00:25:04.544 [2024-07-16 01:18:20.268567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.544 [2024-07-16 01:18:20.268644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.544 qpair failed and we were unable to recover it. 00:25:04.544 [2024-07-16 01:18:20.268925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.544 [2024-07-16 01:18:20.268995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.544 qpair failed and we were unable to recover it. 00:25:04.544 [2024-07-16 01:18:20.269283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.544 [2024-07-16 01:18:20.269371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.544 qpair failed and we were unable to recover it. 00:25:04.544 [2024-07-16 01:18:20.269672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.544 [2024-07-16 01:18:20.269748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.545 qpair failed and we were unable to recover it. 
00:25:04.545 [2024-07-16 01:18:20.270048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.545 [2024-07-16 01:18:20.270110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.545 qpair failed and we were unable to recover it. 00:25:04.545 [2024-07-16 01:18:20.270400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.545 [2024-07-16 01:18:20.270496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.545 qpair failed and we were unable to recover it. 00:25:04.545 [2024-07-16 01:18:20.270795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.545 [2024-07-16 01:18:20.270874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.545 qpair failed and we were unable to recover it. 00:25:04.545 [2024-07-16 01:18:20.271173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.545 [2024-07-16 01:18:20.271255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.545 qpair failed and we were unable to recover it. 00:25:04.545 [2024-07-16 01:18:20.271586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.545 [2024-07-16 01:18:20.271664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.545 qpair failed and we were unable to recover it. 00:25:04.545 [2024-07-16 01:18:20.271978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.545 [2024-07-16 01:18:20.272038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.545 qpair failed and we were unable to recover it. 00:25:04.545 [2024-07-16 01:18:20.272356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.545 [2024-07-16 01:18:20.272432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.545 qpair failed and we were unable to recover it. 00:25:04.545 [2024-07-16 01:18:20.272713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.545 [2024-07-16 01:18:20.272790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.545 qpair failed and we were unable to recover it. 00:25:04.545 [2024-07-16 01:18:20.273065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.545 [2024-07-16 01:18:20.273127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.545 qpair failed and we were unable to recover it. 00:25:04.545 [2024-07-16 01:18:20.273453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.545 [2024-07-16 01:18:20.273534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.545 qpair failed and we were unable to recover it. 
00:25:04.545 [2024-07-16 01:18:20.273778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.545 [2024-07-16 01:18:20.273855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.545 qpair failed and we were unable to recover it. 00:25:04.545 [2024-07-16 01:18:20.274158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.545 [2024-07-16 01:18:20.274218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.545 qpair failed and we were unable to recover it. 00:25:04.545 [2024-07-16 01:18:20.274466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.545 [2024-07-16 01:18:20.274546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.545 qpair failed and we were unable to recover it. 00:25:04.545 [2024-07-16 01:18:20.274826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.545 [2024-07-16 01:18:20.274886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.545 qpair failed and we were unable to recover it. 00:25:04.545 [2024-07-16 01:18:20.275198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.545 [2024-07-16 01:18:20.275276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.545 qpair failed and we were unable to recover it. 00:25:04.545 [2024-07-16 01:18:20.275558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.545 [2024-07-16 01:18:20.275644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.545 qpair failed and we were unable to recover it. 00:25:04.545 [2024-07-16 01:18:20.275949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.545 [2024-07-16 01:18:20.276021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.545 qpair failed and we were unable to recover it. 00:25:04.545 [2024-07-16 01:18:20.276315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.545 [2024-07-16 01:18:20.276392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.545 qpair failed and we were unable to recover it. 00:25:04.545 [2024-07-16 01:18:20.276707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.545 [2024-07-16 01:18:20.276788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.545 qpair failed and we were unable to recover it. 00:25:04.545 [2024-07-16 01:18:20.277046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.545 [2024-07-16 01:18:20.277108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.545 qpair failed and we were unable to recover it. 
00:25:04.545 [2024-07-16 01:18:20.277383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.545 [2024-07-16 01:18:20.277460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.545 qpair failed and we were unable to recover it. 00:25:04.545 [2024-07-16 01:18:20.277797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.545 [2024-07-16 01:18:20.277877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.545 qpair failed and we were unable to recover it. 00:25:04.545 [2024-07-16 01:18:20.278154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.545 [2024-07-16 01:18:20.278232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.545 qpair failed and we were unable to recover it. 00:25:04.545 [2024-07-16 01:18:20.278520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.545 [2024-07-16 01:18:20.278605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.545 qpair failed and we were unable to recover it. 00:25:04.545 [2024-07-16 01:18:20.278826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.545 [2024-07-16 01:18:20.278888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.545 qpair failed and we were unable to recover it. 00:25:04.545 [2024-07-16 01:18:20.279173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.545 [2024-07-16 01:18:20.279261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.545 qpair failed and we were unable to recover it. 00:25:04.545 [2024-07-16 01:18:20.279593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.545 [2024-07-16 01:18:20.279672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.545 qpair failed and we were unable to recover it. 00:25:04.545 [2024-07-16 01:18:20.279979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.545 [2024-07-16 01:18:20.280039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.545 qpair failed and we were unable to recover it. 00:25:04.545 [2024-07-16 01:18:20.280377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.545 [2024-07-16 01:18:20.280473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.545 qpair failed and we were unable to recover it. 00:25:04.545 [2024-07-16 01:18:20.280818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.545 [2024-07-16 01:18:20.280896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.545 qpair failed and we were unable to recover it. 
00:25:04.545 [2024-07-16 01:18:20.281174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.545 [2024-07-16 01:18:20.281234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.545 qpair failed and we were unable to recover it.
[... the same three-line connect() failure repeats continuously, only the timestamps advancing (01:18:20.281545 through 01:18:20.358012), always tqpair=0x14b73f0, addr=10.0.0.2, port=4420 ...]
00:25:04.551 [2024-07-16 01:18:20.358280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.551 [2024-07-16 01:18:20.358339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.551 qpair failed and we were unable to recover it.
00:25:04.551 [2024-07-16 01:18:20.358638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.551 [2024-07-16 01:18:20.358717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.551 qpair failed and we were unable to recover it. 00:25:04.551 [2024-07-16 01:18:20.358989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.551 [2024-07-16 01:18:20.359050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.551 qpair failed and we were unable to recover it. 00:25:04.551 [2024-07-16 01:18:20.359335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.551 [2024-07-16 01:18:20.359424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.551 qpair failed and we were unable to recover it. 00:25:04.551 [2024-07-16 01:18:20.359769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.551 [2024-07-16 01:18:20.359846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.551 qpair failed and we were unable to recover it. 00:25:04.551 [2024-07-16 01:18:20.360150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.551 [2024-07-16 01:18:20.360209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.551 qpair failed and we were unable to recover it. 00:25:04.551 [2024-07-16 01:18:20.360543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.551 [2024-07-16 01:18:20.360628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.551 qpair failed and we were unable to recover it. 00:25:04.551 [2024-07-16 01:18:20.360930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.551 [2024-07-16 01:18:20.361003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.551 qpair failed and we were unable to recover it. 00:25:04.551 [2024-07-16 01:18:20.361302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.551 [2024-07-16 01:18:20.361371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.551 qpair failed and we were unable to recover it. 00:25:04.551 [2024-07-16 01:18:20.361698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.551 [2024-07-16 01:18:20.361776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.551 qpair failed and we were unable to recover it. 00:25:04.551 [2024-07-16 01:18:20.362087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.551 [2024-07-16 01:18:20.362155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.551 qpair failed and we were unable to recover it. 
00:25:04.551 [2024-07-16 01:18:20.362478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.551 [2024-07-16 01:18:20.362556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.551 qpair failed and we were unable to recover it. 00:25:04.551 [2024-07-16 01:18:20.362873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.551 [2024-07-16 01:18:20.362948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.551 qpair failed and we were unable to recover it. 00:25:04.551 [2024-07-16 01:18:20.363272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.551 [2024-07-16 01:18:20.363333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.551 qpair failed and we were unable to recover it. 00:25:04.551 [2024-07-16 01:18:20.363662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.551 [2024-07-16 01:18:20.363745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.551 qpair failed and we were unable to recover it. 00:25:04.551 [2024-07-16 01:18:20.364069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.551 [2024-07-16 01:18:20.364129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.551 qpair failed and we were unable to recover it. 00:25:04.551 [2024-07-16 01:18:20.364424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.551 [2024-07-16 01:18:20.364500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.551 qpair failed and we were unable to recover it. 00:25:04.551 [2024-07-16 01:18:20.364742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.552 [2024-07-16 01:18:20.364820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.552 qpair failed and we were unable to recover it. 00:25:04.552 [2024-07-16 01:18:20.365111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.552 [2024-07-16 01:18:20.365198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.552 qpair failed and we were unable to recover it. 00:25:04.552 [2024-07-16 01:18:20.365545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.552 [2024-07-16 01:18:20.365621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.552 qpair failed and we were unable to recover it. 00:25:04.552 [2024-07-16 01:18:20.365883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.552 [2024-07-16 01:18:20.365944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.552 qpair failed and we were unable to recover it. 
00:25:04.552 [2024-07-16 01:18:20.366297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.552 [2024-07-16 01:18:20.366357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.552 qpair failed and we were unable to recover it. 00:25:04.552 [2024-07-16 01:18:20.366627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.552 [2024-07-16 01:18:20.366703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.552 qpair failed and we were unable to recover it. 00:25:04.552 [2024-07-16 01:18:20.366989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.552 [2024-07-16 01:18:20.367050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.552 qpair failed and we were unable to recover it. 00:25:04.552 [2024-07-16 01:18:20.367345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.552 [2024-07-16 01:18:20.367423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.552 qpair failed and we were unable to recover it. 00:25:04.552 [2024-07-16 01:18:20.367710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.552 [2024-07-16 01:18:20.367786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.552 qpair failed and we were unable to recover it. 00:25:04.552 [2024-07-16 01:18:20.368050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.552 [2024-07-16 01:18:20.368110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.552 qpair failed and we were unable to recover it. 00:25:04.552 [2024-07-16 01:18:20.368390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.552 [2024-07-16 01:18:20.368466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.552 qpair failed and we were unable to recover it. 00:25:04.552 [2024-07-16 01:18:20.368792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.552 [2024-07-16 01:18:20.368870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.552 qpair failed and we were unable to recover it. 00:25:04.552 [2024-07-16 01:18:20.369166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.552 [2024-07-16 01:18:20.369250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.552 qpair failed and we were unable to recover it. 00:25:04.552 [2024-07-16 01:18:20.369579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.552 [2024-07-16 01:18:20.369657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.552 qpair failed and we were unable to recover it. 
00:25:04.552 [2024-07-16 01:18:20.369926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.552 [2024-07-16 01:18:20.370011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.552 qpair failed and we were unable to recover it. 00:25:04.552 [2024-07-16 01:18:20.370324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.552 [2024-07-16 01:18:20.370401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.552 qpair failed and we were unable to recover it. 00:25:04.552 [2024-07-16 01:18:20.370694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.552 [2024-07-16 01:18:20.370776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.552 qpair failed and we were unable to recover it. 00:25:04.552 [2024-07-16 01:18:20.371046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.552 [2024-07-16 01:18:20.371115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.552 qpair failed and we were unable to recover it. 00:25:04.552 [2024-07-16 01:18:20.371445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.552 [2024-07-16 01:18:20.371530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.552 qpair failed and we were unable to recover it. 00:25:04.552 [2024-07-16 01:18:20.371847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.552 [2024-07-16 01:18:20.371922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.552 qpair failed and we were unable to recover it. 00:25:04.552 [2024-07-16 01:18:20.372266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.552 [2024-07-16 01:18:20.372344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.552 qpair failed and we were unable to recover it. 00:25:04.552 [2024-07-16 01:18:20.372594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.552 [2024-07-16 01:18:20.372672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.552 qpair failed and we were unable to recover it. 00:25:04.552 [2024-07-16 01:18:20.372922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.552 [2024-07-16 01:18:20.372997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.552 qpair failed and we were unable to recover it. 00:25:04.552 [2024-07-16 01:18:20.373269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.552 [2024-07-16 01:18:20.373347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.552 qpair failed and we were unable to recover it. 
00:25:04.552 [2024-07-16 01:18:20.373627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.552 [2024-07-16 01:18:20.373704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.552 qpair failed and we were unable to recover it. 00:25:04.552 [2024-07-16 01:18:20.373973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.552 [2024-07-16 01:18:20.374034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.552 qpair failed and we were unable to recover it. 00:25:04.552 [2024-07-16 01:18:20.374331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.552 [2024-07-16 01:18:20.374400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.552 qpair failed and we were unable to recover it. 00:25:04.552 [2024-07-16 01:18:20.374721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.552 [2024-07-16 01:18:20.374798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.552 qpair failed and we were unable to recover it. 00:25:04.552 [2024-07-16 01:18:20.375099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.552 [2024-07-16 01:18:20.375169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.552 qpair failed and we were unable to recover it. 00:25:04.552 [2024-07-16 01:18:20.375459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.552 [2024-07-16 01:18:20.375535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.552 qpair failed and we were unable to recover it. 00:25:04.552 [2024-07-16 01:18:20.375852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.552 [2024-07-16 01:18:20.375935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.552 qpair failed and we were unable to recover it. 00:25:04.552 [2024-07-16 01:18:20.376171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.552 [2024-07-16 01:18:20.376230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.552 qpair failed and we were unable to recover it. 00:25:04.552 [2024-07-16 01:18:20.376513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.552 [2024-07-16 01:18:20.376589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.552 qpair failed and we were unable to recover it. 00:25:04.552 [2024-07-16 01:18:20.376885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.552 [2024-07-16 01:18:20.376944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.552 qpair failed and we were unable to recover it. 
00:25:04.552 [2024-07-16 01:18:20.377248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.552 [2024-07-16 01:18:20.377308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.552 qpair failed and we were unable to recover it. 00:25:04.552 [2024-07-16 01:18:20.377600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.552 [2024-07-16 01:18:20.377678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.552 qpair failed and we were unable to recover it. 00:25:04.552 [2024-07-16 01:18:20.377930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.552 [2024-07-16 01:18:20.378016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.552 qpair failed and we were unable to recover it. 00:25:04.552 [2024-07-16 01:18:20.378334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.552 [2024-07-16 01:18:20.378410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.552 qpair failed and we were unable to recover it. 00:25:04.552 [2024-07-16 01:18:20.378678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.552 [2024-07-16 01:18:20.378755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.552 qpair failed and we were unable to recover it. 00:25:04.552 [2024-07-16 01:18:20.379076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.552 [2024-07-16 01:18:20.379138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.552 qpair failed and we were unable to recover it. 00:25:04.552 [2024-07-16 01:18:20.379429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.552 [2024-07-16 01:18:20.379506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.552 qpair failed and we were unable to recover it. 00:25:04.552 [2024-07-16 01:18:20.379831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.552 [2024-07-16 01:18:20.379908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.553 qpair failed and we were unable to recover it. 00:25:04.553 [2024-07-16 01:18:20.380215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.553 [2024-07-16 01:18:20.380293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.553 qpair failed and we were unable to recover it. 00:25:04.553 [2024-07-16 01:18:20.380576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.553 [2024-07-16 01:18:20.380662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.553 qpair failed and we were unable to recover it. 
00:25:04.553 [2024-07-16 01:18:20.380950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.553 [2024-07-16 01:18:20.381040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.553 qpair failed and we were unable to recover it. 00:25:04.553 [2024-07-16 01:18:20.381368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.553 [2024-07-16 01:18:20.381447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.553 qpair failed and we were unable to recover it. 00:25:04.553 [2024-07-16 01:18:20.381763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.553 [2024-07-16 01:18:20.381840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.553 qpair failed and we were unable to recover it. 00:25:04.553 [2024-07-16 01:18:20.382109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.553 [2024-07-16 01:18:20.382169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.553 qpair failed and we were unable to recover it. 00:25:04.553 [2024-07-16 01:18:20.382404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.553 [2024-07-16 01:18:20.382482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.553 qpair failed and we were unable to recover it. 00:25:04.553 [2024-07-16 01:18:20.382778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.553 [2024-07-16 01:18:20.382861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.553 qpair failed and we were unable to recover it. 00:25:04.553 [2024-07-16 01:18:20.383174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.553 [2024-07-16 01:18:20.383234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.553 qpair failed and we were unable to recover it. 00:25:04.553 [2024-07-16 01:18:20.383550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.553 [2024-07-16 01:18:20.383626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.553 qpair failed and we were unable to recover it. 00:25:04.553 [2024-07-16 01:18:20.383926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.553 [2024-07-16 01:18:20.384000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.553 qpair failed and we were unable to recover it. 00:25:04.553 [2024-07-16 01:18:20.384342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.553 [2024-07-16 01:18:20.384427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.553 qpair failed and we were unable to recover it. 
00:25:04.553 [2024-07-16 01:18:20.384754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.553 [2024-07-16 01:18:20.384841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.553 qpair failed and we were unable to recover it. 00:25:04.553 [2024-07-16 01:18:20.385092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.553 [2024-07-16 01:18:20.385153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.553 qpair failed and we were unable to recover it. 00:25:04.553 [2024-07-16 01:18:20.385474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.553 [2024-07-16 01:18:20.385550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.553 qpair failed and we were unable to recover it. 00:25:04.553 [2024-07-16 01:18:20.385802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.553 [2024-07-16 01:18:20.385879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.553 qpair failed and we were unable to recover it. 00:25:04.553 [2024-07-16 01:18:20.386194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.553 [2024-07-16 01:18:20.386274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.553 qpair failed and we were unable to recover it. 00:25:04.553 [2024-07-16 01:18:20.386522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.553 [2024-07-16 01:18:20.386599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.553 qpair failed and we were unable to recover it. 00:25:04.553 [2024-07-16 01:18:20.386825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.553 [2024-07-16 01:18:20.386887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.553 qpair failed and we were unable to recover it. 00:25:04.553 [2024-07-16 01:18:20.387190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.553 [2024-07-16 01:18:20.387270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.553 qpair failed and we were unable to recover it. 00:25:04.553 [2024-07-16 01:18:20.387594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.553 [2024-07-16 01:18:20.387671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.553 qpair failed and we were unable to recover it. 00:25:04.553 [2024-07-16 01:18:20.387981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.553 [2024-07-16 01:18:20.388041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.553 qpair failed and we were unable to recover it. 
00:25:04.553 [2024-07-16 01:18:20.388333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.553 [2024-07-16 01:18:20.388418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.553 qpair failed and we were unable to recover it. 00:25:04.553 [2024-07-16 01:18:20.388753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.553 [2024-07-16 01:18:20.388832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.553 qpair failed and we were unable to recover it. 00:25:04.553 [2024-07-16 01:18:20.389101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.553 [2024-07-16 01:18:20.389162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.553 qpair failed and we were unable to recover it. 00:25:04.553 [2024-07-16 01:18:20.389492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.553 [2024-07-16 01:18:20.389578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.553 qpair failed and we were unable to recover it. 00:25:04.553 [2024-07-16 01:18:20.389908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.553 [2024-07-16 01:18:20.390012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.553 qpair failed and we were unable to recover it. 00:25:04.553 [2024-07-16 01:18:20.390285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.553 [2024-07-16 01:18:20.390344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.553 qpair failed and we were unable to recover it. 00:25:04.553 [2024-07-16 01:18:20.390598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.553 [2024-07-16 01:18:20.390660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.553 qpair failed and we were unable to recover it. 00:25:04.553 [2024-07-16 01:18:20.390950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.553 [2024-07-16 01:18:20.391043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.553 qpair failed and we were unable to recover it. 00:25:04.553 [2024-07-16 01:18:20.391370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.553 [2024-07-16 01:18:20.391446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.553 qpair failed and we were unable to recover it. 00:25:04.553 [2024-07-16 01:18:20.391742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.553 [2024-07-16 01:18:20.391819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.553 qpair failed and we were unable to recover it. 
00:25:04.553 [2024-07-16 01:18:20.392128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.553 [2024-07-16 01:18:20.392188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.553 qpair failed and we were unable to recover it. 00:25:04.553 [2024-07-16 01:18:20.392484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.553 [2024-07-16 01:18:20.392565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.553 qpair failed and we were unable to recover it. 00:25:04.553 [2024-07-16 01:18:20.392899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.553 [2024-07-16 01:18:20.392991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.553 qpair failed and we were unable to recover it. 00:25:04.553 [2024-07-16 01:18:20.393225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.553 [2024-07-16 01:18:20.393287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.553 qpair failed and we were unable to recover it. 00:25:04.553 [2024-07-16 01:18:20.393609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.553 [2024-07-16 01:18:20.393693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.553 qpair failed and we were unable to recover it. 00:25:04.553 [2024-07-16 01:18:20.393985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.553 [2024-07-16 01:18:20.394045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.553 qpair failed and we were unable to recover it. 00:25:04.553 [2024-07-16 01:18:20.394314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.553 [2024-07-16 01:18:20.394373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.553 qpair failed and we were unable to recover it. 00:25:04.553 [2024-07-16 01:18:20.394693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.553 [2024-07-16 01:18:20.394771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.553 qpair failed and we were unable to recover it. 00:25:04.553 [2024-07-16 01:18:20.395004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.553 [2024-07-16 01:18:20.395066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.553 qpair failed and we were unable to recover it. 00:25:04.554 [2024-07-16 01:18:20.395401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.554 [2024-07-16 01:18:20.395487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.554 qpair failed and we were unable to recover it. 
00:25:04.554 [2024-07-16 01:18:20.395810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.554 [2024-07-16 01:18:20.395900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.554 qpair failed and we were unable to recover it. 00:25:04.554 [2024-07-16 01:18:20.396179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.554 [2024-07-16 01:18:20.396239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.554 qpair failed and we were unable to recover it. 00:25:04.554 [2024-07-16 01:18:20.396525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.554 [2024-07-16 01:18:20.396612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.554 qpair failed and we were unable to recover it. 00:25:04.554 [2024-07-16 01:18:20.396913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.554 [2024-07-16 01:18:20.396984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.554 qpair failed and we were unable to recover it. 00:25:04.554 [2024-07-16 01:18:20.397264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.554 [2024-07-16 01:18:20.397324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.554 qpair failed and we were unable to recover it. 00:25:04.554 [2024-07-16 01:18:20.397652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.554 [2024-07-16 01:18:20.397733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.554 qpair failed and we were unable to recover it. 00:25:04.554 [2024-07-16 01:18:20.398008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.554 [2024-07-16 01:18:20.398070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.554 qpair failed and we were unable to recover it. 00:25:04.554 [2024-07-16 01:18:20.398392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.554 [2024-07-16 01:18:20.398471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.554 qpair failed and we were unable to recover it. 00:25:04.554 [2024-07-16 01:18:20.398792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.554 [2024-07-16 01:18:20.398868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.554 qpair failed and we were unable to recover it. 00:25:04.554 [2024-07-16 01:18:20.399181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.554 [2024-07-16 01:18:20.399248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.554 qpair failed and we were unable to recover it. 
00:25:04.554 [2024-07-16 01:18:20.399552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.554 [2024-07-16 01:18:20.399638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.554 qpair failed and we were unable to recover it. 00:25:04.554 [2024-07-16 01:18:20.399895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.554 [2024-07-16 01:18:20.399967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.554 qpair failed and we were unable to recover it. 00:25:04.554 [2024-07-16 01:18:20.400270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.554 [2024-07-16 01:18:20.400329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.554 qpair failed and we were unable to recover it. 00:25:04.554 [2024-07-16 01:18:20.400559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.554 [2024-07-16 01:18:20.400635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.554 qpair failed and we were unable to recover it. 00:25:04.554 [2024-07-16 01:18:20.400916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.554 [2024-07-16 01:18:20.400986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.554 qpair failed and we were unable to recover it. 00:25:04.554 [2024-07-16 01:18:20.401293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.554 [2024-07-16 01:18:20.401353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.554 qpair failed and we were unable to recover it. 00:25:04.554 [2024-07-16 01:18:20.401604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.554 [2024-07-16 01:18:20.401680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.554 qpair failed and we were unable to recover it. 00:25:04.554 [2024-07-16 01:18:20.402008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.554 [2024-07-16 01:18:20.402070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.554 qpair failed and we were unable to recover it. 00:25:04.554 [2024-07-16 01:18:20.402360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.554 [2024-07-16 01:18:20.402419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.554 qpair failed and we were unable to recover it. 00:25:04.554 [2024-07-16 01:18:20.402748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.554 [2024-07-16 01:18:20.402839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.554 qpair failed and we were unable to recover it. 
00:25:04.554 [2024-07-16 01:18:20.403135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.554 [2024-07-16 01:18:20.403196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.554 qpair failed and we were unable to recover it. 00:25:04.554 [2024-07-16 01:18:20.403522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.554 [2024-07-16 01:18:20.403598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.554 qpair failed and we were unable to recover it. 00:25:04.554 [2024-07-16 01:18:20.403878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.554 [2024-07-16 01:18:20.403969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.554 qpair failed and we were unable to recover it. 00:25:04.554 [2024-07-16 01:18:20.404251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.554 [2024-07-16 01:18:20.404310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.554 qpair failed and we were unable to recover it. 00:25:04.554 [2024-07-16 01:18:20.404633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.554 [2024-07-16 01:18:20.404720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.554 qpair failed and we were unable to recover it. 00:25:04.554 [2024-07-16 01:18:20.404984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.554 [2024-07-16 01:18:20.405044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.554 qpair failed and we were unable to recover it. 00:25:04.554 [2024-07-16 01:18:20.405361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.554 [2024-07-16 01:18:20.405443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.554 qpair failed and we were unable to recover it. 00:25:04.554 [2024-07-16 01:18:20.405722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.554 [2024-07-16 01:18:20.405798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.554 qpair failed and we were unable to recover it. 00:25:04.554 [2024-07-16 01:18:20.406042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.554 [2024-07-16 01:18:20.406103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.554 qpair failed and we were unable to recover it. 00:25:04.554 [2024-07-16 01:18:20.406312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.554 [2024-07-16 01:18:20.406380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.554 qpair failed and we were unable to recover it. 
00:25:04.554 [2024-07-16 01:18:20.406638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.554 [2024-07-16 01:18:20.406724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.554 qpair failed and we were unable to recover it. 00:25:04.554 [2024-07-16 01:18:20.406985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.554 [2024-07-16 01:18:20.407045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.554 qpair failed and we were unable to recover it. 00:25:04.554 [2024-07-16 01:18:20.407382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.554 [2024-07-16 01:18:20.407467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.554 qpair failed and we were unable to recover it. 00:25:04.554 [2024-07-16 01:18:20.407739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.554 [2024-07-16 01:18:20.407814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.555 qpair failed and we were unable to recover it. 00:25:04.555 [2024-07-16 01:18:20.408107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.555 [2024-07-16 01:18:20.408166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.555 qpair failed and we were unable to recover it. 00:25:04.555 [2024-07-16 01:18:20.408489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.555 [2024-07-16 01:18:20.408565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.555 qpair failed and we were unable to recover it. 00:25:04.555 [2024-07-16 01:18:20.408851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.555 [2024-07-16 01:18:20.408909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.555 qpair failed and we were unable to recover it. 00:25:04.555 [2024-07-16 01:18:20.409209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.555 [2024-07-16 01:18:20.409286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.555 qpair failed and we were unable to recover it. 00:25:04.555 [2024-07-16 01:18:20.409551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.555 [2024-07-16 01:18:20.409627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.555 qpair failed and we were unable to recover it. 00:25:04.555 [2024-07-16 01:18:20.409914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.555 [2024-07-16 01:18:20.410002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.555 qpair failed and we were unable to recover it. 
00:25:04.555 [2024-07-16 01:18:20.410264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.555 [2024-07-16 01:18:20.410340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.555 qpair failed and we were unable to recover it. 00:25:04.555 [2024-07-16 01:18:20.410589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.555 [2024-07-16 01:18:20.410666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.555 qpair failed and we were unable to recover it. 00:25:04.555 [2024-07-16 01:18:20.410914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.555 [2024-07-16 01:18:20.410990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.555 qpair failed and we were unable to recover it. 00:25:04.555 [2024-07-16 01:18:20.411284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.555 [2024-07-16 01:18:20.411368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.555 qpair failed and we were unable to recover it. 00:25:04.555 [2024-07-16 01:18:20.411689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.555 [2024-07-16 01:18:20.411766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.555 qpair failed and we were unable to recover it. 00:25:04.555 [2024-07-16 01:18:20.412057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.555 [2024-07-16 01:18:20.412118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.555 qpair failed and we were unable to recover it. 00:25:04.555 [2024-07-16 01:18:20.412443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.555 [2024-07-16 01:18:20.412519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.555 qpair failed and we were unable to recover it. 00:25:04.555 [2024-07-16 01:18:20.412817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.555 [2024-07-16 01:18:20.412897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.555 qpair failed and we were unable to recover it. 00:25:04.555 [2024-07-16 01:18:20.413156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.555 [2024-07-16 01:18:20.413234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.555 qpair failed and we were unable to recover it. 00:25:04.555 [2024-07-16 01:18:20.413548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.555 [2024-07-16 01:18:20.413626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.555 qpair failed and we were unable to recover it. 
00:25:04.555 [2024-07-16 01:18:20.413883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.555 [2024-07-16 01:18:20.413942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.555 qpair failed and we were unable to recover it. 00:25:04.555 [2024-07-16 01:18:20.414279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.555 [2024-07-16 01:18:20.414360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.555 qpair failed and we were unable to recover it. 00:25:04.555 [2024-07-16 01:18:20.414671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.555 [2024-07-16 01:18:20.414748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.555 qpair failed and we were unable to recover it. 00:25:04.555 [2024-07-16 01:18:20.415077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.555 [2024-07-16 01:18:20.415138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.555 qpair failed and we were unable to recover it. 00:25:04.555 [2024-07-16 01:18:20.415470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.555 [2024-07-16 01:18:20.415547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.555 qpair failed and we were unable to recover it. 00:25:04.555 [2024-07-16 01:18:20.415878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.555 [2024-07-16 01:18:20.415985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.555 qpair failed and we were unable to recover it. 00:25:04.555 [2024-07-16 01:18:20.416282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.555 [2024-07-16 01:18:20.416359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.555 qpair failed and we were unable to recover it. 00:25:04.555 [2024-07-16 01:18:20.416679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.555 [2024-07-16 01:18:20.416766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.555 qpair failed and we were unable to recover it. 00:25:04.555 [2024-07-16 01:18:20.417067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.555 [2024-07-16 01:18:20.417128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.555 qpair failed and we were unable to recover it. 00:25:04.555 [2024-07-16 01:18:20.417456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.555 [2024-07-16 01:18:20.417533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.555 qpair failed and we were unable to recover it. 
00:25:04.555 [2024-07-16 01:18:20.417762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.555 [2024-07-16 01:18:20.417838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.555 qpair failed and we were unable to recover it. 00:25:04.555 [2024-07-16 01:18:20.418119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.555 [2024-07-16 01:18:20.418206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.555 qpair failed and we were unable to recover it. 00:25:04.555 [2024-07-16 01:18:20.418528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.555 [2024-07-16 01:18:20.418604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.555 qpair failed and we were unable to recover it. 00:25:04.555 [2024-07-16 01:18:20.418894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.555 [2024-07-16 01:18:20.418953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.555 qpair failed and we were unable to recover it. 00:25:04.555 [2024-07-16 01:18:20.419294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.555 [2024-07-16 01:18:20.419374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.555 qpair failed and we were unable to recover it. 00:25:04.555 [2024-07-16 01:18:20.419702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.555 [2024-07-16 01:18:20.419790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.555 qpair failed and we were unable to recover it. 00:25:04.555 [2024-07-16 01:18:20.420093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.555 [2024-07-16 01:18:20.420153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.555 qpair failed and we were unable to recover it. 00:25:04.555 [2024-07-16 01:18:20.420430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.555 [2024-07-16 01:18:20.420506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.555 qpair failed and we were unable to recover it. 00:25:04.555 [2024-07-16 01:18:20.420839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.555 [2024-07-16 01:18:20.420915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.555 qpair failed and we were unable to recover it. 00:25:04.555 [2024-07-16 01:18:20.421281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.555 [2024-07-16 01:18:20.421362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.555 qpair failed and we were unable to recover it. 
00:25:04.555 [2024-07-16 01:18:20.421652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.555 [2024-07-16 01:18:20.421713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.555 qpair failed and we were unable to recover it. 00:25:04.555 [2024-07-16 01:18:20.421945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.555 [2024-07-16 01:18:20.422020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.555 qpair failed and we were unable to recover it. 00:25:04.555 [2024-07-16 01:18:20.422292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.555 [2024-07-16 01:18:20.422370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.555 qpair failed and we were unable to recover it. 00:25:04.555 [2024-07-16 01:18:20.422668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.555 [2024-07-16 01:18:20.422744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.555 qpair failed and we were unable to recover it. 00:25:04.555 [2024-07-16 01:18:20.423079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.555 [2024-07-16 01:18:20.423150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.556 qpair failed and we were unable to recover it. 00:25:04.556 [2024-07-16 01:18:20.423459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.556 [2024-07-16 01:18:20.423535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.556 qpair failed and we were unable to recover it. 00:25:04.556 [2024-07-16 01:18:20.423816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.556 [2024-07-16 01:18:20.423901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.556 qpair failed and we were unable to recover it. 00:25:04.556 [2024-07-16 01:18:20.424255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.556 [2024-07-16 01:18:20.424343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.556 qpair failed and we were unable to recover it. 00:25:04.556 [2024-07-16 01:18:20.424665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.556 [2024-07-16 01:18:20.424745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.556 qpair failed and we were unable to recover it. 00:25:04.556 [2024-07-16 01:18:20.425029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.556 [2024-07-16 01:18:20.425107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.556 qpair failed and we were unable to recover it. 
00:25:04.556 [2024-07-16 01:18:20.425430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.556 [2024-07-16 01:18:20.425507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.556 qpair failed and we were unable to recover it. 00:25:04.556 [2024-07-16 01:18:20.425810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.556 [2024-07-16 01:18:20.425888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.556 qpair failed and we were unable to recover it. 00:25:04.556 [2024-07-16 01:18:20.426132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.556 [2024-07-16 01:18:20.426204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.556 qpair failed and we were unable to recover it. 00:25:04.556 [2024-07-16 01:18:20.426519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.556 [2024-07-16 01:18:20.426605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.556 qpair failed and we were unable to recover it. 00:25:04.556 [2024-07-16 01:18:20.426895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.556 [2024-07-16 01:18:20.426972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.556 qpair failed and we were unable to recover it. 00:25:04.556 [2024-07-16 01:18:20.427275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.556 [2024-07-16 01:18:20.427359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.556 qpair failed and we were unable to recover it. 00:25:04.556 [2024-07-16 01:18:20.427655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.556 [2024-07-16 01:18:20.427731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.556 qpair failed and we were unable to recover it. 00:25:04.556 [2024-07-16 01:18:20.428021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.556 [2024-07-16 01:18:20.428082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.556 qpair failed and we were unable to recover it. 00:25:04.556 [2024-07-16 01:18:20.428364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.556 [2024-07-16 01:18:20.428453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.556 qpair failed and we were unable to recover it. 00:25:04.556 [2024-07-16 01:18:20.428729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.556 [2024-07-16 01:18:20.428805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.556 qpair failed and we were unable to recover it. 
00:25:04.556 [2024-07-16 01:18:20.429106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.556 [2024-07-16 01:18:20.429177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.556 qpair failed and we were unable to recover it. 00:25:04.556 [2024-07-16 01:18:20.429499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.556 [2024-07-16 01:18:20.429576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.556 qpair failed and we were unable to recover it. 00:25:04.556 [2024-07-16 01:18:20.429867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.556 [2024-07-16 01:18:20.429927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.556 qpair failed and we were unable to recover it. 00:25:04.556 [2024-07-16 01:18:20.430267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.556 [2024-07-16 01:18:20.430343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.556 qpair failed and we were unable to recover it. 00:25:04.556 [2024-07-16 01:18:20.430634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.556 [2024-07-16 01:18:20.430716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.556 qpair failed and we were unable to recover it. 00:25:04.556 [2024-07-16 01:18:20.430970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.556 [2024-07-16 01:18:20.431031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.556 qpair failed and we were unable to recover it. 00:25:04.556 [2024-07-16 01:18:20.431268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.556 [2024-07-16 01:18:20.431330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.556 qpair failed and we were unable to recover it. 00:25:04.556 [2024-07-16 01:18:20.431618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.556 [2024-07-16 01:18:20.431701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.556 qpair failed and we were unable to recover it. 00:25:04.556 [2024-07-16 01:18:20.431970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.556 [2024-07-16 01:18:20.432031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.556 qpair failed and we were unable to recover it. 00:25:04.556 [2024-07-16 01:18:20.432331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.556 [2024-07-16 01:18:20.432401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.556 qpair failed and we were unable to recover it. 
00:25:04.556 [2024-07-16 01:18:20.432721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.556 [2024-07-16 01:18:20.432798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.556 qpair failed and we were unable to recover it. 00:25:04.556 [2024-07-16 01:18:20.433094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.556 [2024-07-16 01:18:20.433155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.556 qpair failed and we were unable to recover it. 00:25:04.556 [2024-07-16 01:18:20.433476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.556 [2024-07-16 01:18:20.433552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.556 qpair failed and we were unable to recover it. 00:25:04.556 [2024-07-16 01:18:20.433876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.556 [2024-07-16 01:18:20.433953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.556 qpair failed and we were unable to recover it. 00:25:04.556 [2024-07-16 01:18:20.434276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.556 [2024-07-16 01:18:20.434335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.556 qpair failed and we were unable to recover it. 00:25:04.556 [2024-07-16 01:18:20.434653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.556 [2024-07-16 01:18:20.434730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.556 qpair failed and we were unable to recover it. 00:25:04.556 [2024-07-16 01:18:20.435018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.556 [2024-07-16 01:18:20.435078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.556 qpair failed and we were unable to recover it. 00:25:04.556 [2024-07-16 01:18:20.435399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.556 [2024-07-16 01:18:20.435482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.556 qpair failed and we were unable to recover it. 00:25:04.556 [2024-07-16 01:18:20.435806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.556 [2024-07-16 01:18:20.435882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.556 qpair failed and we were unable to recover it. 00:25:04.556 [2024-07-16 01:18:20.436150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.556 [2024-07-16 01:18:20.436219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.556 qpair failed and we were unable to recover it. 
00:25:04.556 [2024-07-16 01:18:20.436546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.556 [2024-07-16 01:18:20.436622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.556 qpair failed and we were unable to recover it. 00:25:04.556 [2024-07-16 01:18:20.436914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.556 [2024-07-16 01:18:20.436989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.556 qpair failed and we were unable to recover it. 00:25:04.556 [2024-07-16 01:18:20.437287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.556 [2024-07-16 01:18:20.437346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.556 qpair failed and we were unable to recover it. 00:25:04.556 [2024-07-16 01:18:20.437645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.556 [2024-07-16 01:18:20.437722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.556 qpair failed and we were unable to recover it. 00:25:04.556 [2024-07-16 01:18:20.437940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.556 [2024-07-16 01:18:20.438017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.556 qpair failed and we were unable to recover it. 00:25:04.556 [2024-07-16 01:18:20.438347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.556 [2024-07-16 01:18:20.438423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.556 qpair failed and we were unable to recover it. 00:25:04.557 [2024-07-16 01:18:20.438745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.557 [2024-07-16 01:18:20.438822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.557 qpair failed and we were unable to recover it. 00:25:04.557 [2024-07-16 01:18:20.439113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.557 [2024-07-16 01:18:20.439174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.557 qpair failed and we were unable to recover it. 00:25:04.557 [2024-07-16 01:18:20.439495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.557 [2024-07-16 01:18:20.439578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.557 qpair failed and we were unable to recover it. 00:25:04.557 [2024-07-16 01:18:20.439864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.557 [2024-07-16 01:18:20.439922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.557 qpair failed and we were unable to recover it. 
00:25:04.557 [2024-07-16 01:18:20.440229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.557 [2024-07-16 01:18:20.440298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.557 qpair failed and we were unable to recover it. 00:25:04.557 [2024-07-16 01:18:20.440621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.557 [2024-07-16 01:18:20.440698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.557 qpair failed and we were unable to recover it. 00:25:04.557 [2024-07-16 01:18:20.440972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.557 [2024-07-16 01:18:20.441032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.557 qpair failed and we were unable to recover it. 00:25:04.557 [2024-07-16 01:18:20.441333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.557 [2024-07-16 01:18:20.441402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.557 qpair failed and we were unable to recover it. 00:25:04.557 [2024-07-16 01:18:20.441716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.557 [2024-07-16 01:18:20.441793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.557 qpair failed and we were unable to recover it. 00:25:04.557 [2024-07-16 01:18:20.442099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.557 [2024-07-16 01:18:20.442166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.557 qpair failed and we were unable to recover it. 00:25:04.557 [2024-07-16 01:18:20.442481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.557 [2024-07-16 01:18:20.442557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.557 qpair failed and we were unable to recover it. 00:25:04.557 [2024-07-16 01:18:20.442873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.557 [2024-07-16 01:18:20.442950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.557 qpair failed and we were unable to recover it. 00:25:04.557 [2024-07-16 01:18:20.443267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.557 [2024-07-16 01:18:20.443326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.557 qpair failed and we were unable to recover it. 00:25:04.557 [2024-07-16 01:18:20.443635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.557 [2024-07-16 01:18:20.443711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.557 qpair failed and we were unable to recover it. 
00:25:04.557 [2024-07-16 01:18:20.443982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.557 [2024-07-16 01:18:20.444045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.557 qpair failed and we were unable to recover it. 00:25:04.557 [2024-07-16 01:18:20.444301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.557 [2024-07-16 01:18:20.444360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.557 qpair failed and we were unable to recover it. 00:25:04.557 [2024-07-16 01:18:20.444651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.557 [2024-07-16 01:18:20.444735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.557 qpair failed and we were unable to recover it. 00:25:04.557 [2024-07-16 01:18:20.445029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.557 [2024-07-16 01:18:20.445089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.557 qpair failed and we were unable to recover it. 00:25:04.557 [2024-07-16 01:18:20.445412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.557 [2024-07-16 01:18:20.445487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.557 qpair failed and we were unable to recover it. 00:25:04.557 [2024-07-16 01:18:20.445816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.557 [2024-07-16 01:18:20.445891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.557 qpair failed and we were unable to recover it. 00:25:04.557 [2024-07-16 01:18:20.446173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.557 [2024-07-16 01:18:20.446232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.557 qpair failed and we were unable to recover it. 00:25:04.557 [2024-07-16 01:18:20.446570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.557 [2024-07-16 01:18:20.446647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.557 qpair failed and we were unable to recover it. 00:25:04.557 [2024-07-16 01:18:20.446901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.557 [2024-07-16 01:18:20.446974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.557 qpair failed and we were unable to recover it. 00:25:04.557 [2024-07-16 01:18:20.447244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.557 [2024-07-16 01:18:20.447304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.557 qpair failed and we were unable to recover it. 
00:25:04.557 [2024-07-16 01:18:20.447629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.557 [2024-07-16 01:18:20.447711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.557 qpair failed and we were unable to recover it. 00:25:04.557 [2024-07-16 01:18:20.448005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.557 [2024-07-16 01:18:20.448065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.557 qpair failed and we were unable to recover it. 00:25:04.557 [2024-07-16 01:18:20.448381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.557 [2024-07-16 01:18:20.448466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.557 qpair failed and we were unable to recover it. 00:25:04.557 [2024-07-16 01:18:20.448737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.557 [2024-07-16 01:18:20.448813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.557 qpair failed and we were unable to recover it. 00:25:04.557 [2024-07-16 01:18:20.449066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.557 [2024-07-16 01:18:20.449127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.557 qpair failed and we were unable to recover it. 00:25:04.557 [2024-07-16 01:18:20.449448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.557 [2024-07-16 01:18:20.449528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.557 qpair failed and we were unable to recover it. 00:25:04.557 [2024-07-16 01:18:20.449821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.557 [2024-07-16 01:18:20.449882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.557 qpair failed and we were unable to recover it. 00:25:04.557 [2024-07-16 01:18:20.450155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.557 [2024-07-16 01:18:20.450216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.557 qpair failed and we were unable to recover it. 00:25:04.557 [2024-07-16 01:18:20.450510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.557 [2024-07-16 01:18:20.450590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.557 qpair failed and we were unable to recover it. 00:25:04.557 [2024-07-16 01:18:20.450886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.557 [2024-07-16 01:18:20.450944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.557 qpair failed and we were unable to recover it. 
00:25:04.557 [2024-07-16 01:18:20.451242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.557 [2024-07-16 01:18:20.451314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.557 qpair failed and we were unable to recover it. 00:25:04.557 [2024-07-16 01:18:20.451610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.557 [2024-07-16 01:18:20.451688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.557 qpair failed and we were unable to recover it. 00:25:04.557 [2024-07-16 01:18:20.451920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.557 [2024-07-16 01:18:20.451993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.557 qpair failed and we were unable to recover it. 00:25:04.557 [2024-07-16 01:18:20.452259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.557 [2024-07-16 01:18:20.452337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.557 qpair failed and we were unable to recover it. 00:25:04.557 [2024-07-16 01:18:20.452620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.557 [2024-07-16 01:18:20.452696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.557 qpair failed and we were unable to recover it. 00:25:04.557 [2024-07-16 01:18:20.452976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.557 [2024-07-16 01:18:20.453036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.557 qpair failed and we were unable to recover it. 00:25:04.557 [2024-07-16 01:18:20.453273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.557 [2024-07-16 01:18:20.453332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.557 qpair failed and we were unable to recover it. 00:25:04.557 [2024-07-16 01:18:20.453657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.558 [2024-07-16 01:18:20.453734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.558 qpair failed and we were unable to recover it. 00:25:04.558 [2024-07-16 01:18:20.453995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.558 [2024-07-16 01:18:20.454055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.558 qpair failed and we were unable to recover it. 00:25:04.558 [2024-07-16 01:18:20.454392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.558 [2024-07-16 01:18:20.454474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.558 qpair failed and we were unable to recover it. 
00:25:04.558 [2024-07-16 01:18:20.454791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.558 [2024-07-16 01:18:20.454879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.558 qpair failed and we were unable to recover it. 00:25:04.558 [2024-07-16 01:18:20.455151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.558 [2024-07-16 01:18:20.455213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.558 qpair failed and we were unable to recover it. 00:25:04.558 [2024-07-16 01:18:20.455495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.558 [2024-07-16 01:18:20.455582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.558 qpair failed and we were unable to recover it. 00:25:04.558 [2024-07-16 01:18:20.455874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.558 [2024-07-16 01:18:20.455949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.558 qpair failed and we were unable to recover it. 00:25:04.558 [2024-07-16 01:18:20.456239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.558 [2024-07-16 01:18:20.456299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.558 qpair failed and we were unable to recover it. 00:25:04.558 [2024-07-16 01:18:20.456575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.558 [2024-07-16 01:18:20.456652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.558 qpair failed and we were unable to recover it. 00:25:04.558 [2024-07-16 01:18:20.456939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.558 [2024-07-16 01:18:20.457010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.558 qpair failed and we were unable to recover it. 00:25:04.558 [2024-07-16 01:18:20.457276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.558 [2024-07-16 01:18:20.457335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.558 qpair failed and we were unable to recover it. 00:25:04.558 [2024-07-16 01:18:20.457667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.558 [2024-07-16 01:18:20.457742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.558 qpair failed and we were unable to recover it. 00:25:04.558 [2024-07-16 01:18:20.458049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.558 [2024-07-16 01:18:20.458111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.558 qpair failed and we were unable to recover it. 
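[editor's note] The block repeats because the host side immediately retries: each refused connect() tears the socket down and a new qpair connect attempt begins, producing the next errno-111 / "qpair failed" pair roughly 300 to 400 microseconds later (compare the in-entry timestamps, e.g. .406638, .406985, .407382). As an illustrative sketch only, not the SPDK host code, the storm has the shape of this loop:

  # illustrative only: the repeated log pairs have the shape of this loop;
  # the real retry happens in C around nvme_tcp_qpair_connect_sock()
  while ! (echo > /dev/tcp/10.0.0.2/4420) 2>/dev/null; do
      echo "qpair failed and we were unable to recover it."  # one pair per pass
      sleep 0.0003   # approximate inter-attempt spacing seen in this log
  done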
00:25:04.558 [2024-07-16 01:18:20.458400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.558 [2024-07-16 01:18:20.458478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.558 qpair failed and we were unable to recover it. 00:25:04.558 [2024-07-16 01:18:20.458816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.558 [2024-07-16 01:18:20.458891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.558 qpair failed and we were unable to recover it. 00:25:04.558 [2024-07-16 01:18:20.459183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.558 [2024-07-16 01:18:20.459247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.558 qpair failed and we were unable to recover it. 00:25:04.558 [2024-07-16 01:18:20.459533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.558 [2024-07-16 01:18:20.459618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.558 qpair failed and we were unable to recover it. 00:25:04.558 [2024-07-16 01:18:20.459876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.558 [2024-07-16 01:18:20.459938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.558 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 61372 Killed "${NVMF_APP[@]}" "$@" 00:25:04.558 qpair failed and we were unable to recover it. 00:25:04.558 [2024-07-16 01:18:20.460183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.558 [2024-07-16 01:18:20.460217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.558 qpair failed and we were unable to recover it. 00:25:04.558 01:18:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:25:04.558 [2024-07-16 01:18:20.460372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.558 [2024-07-16 01:18:20.460412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.558 qpair failed and we were unable to recover it. 00:25:04.558 01:18:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:25:04.558 [2024-07-16 01:18:20.460575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.558 [2024-07-16 01:18:20.460609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.558 01:18:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:04.558 qpair failed and we were unable to recover it. 
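[editor's note] The interleaved bash job-control line explains the storm: target_disconnect.sh (line 36) had killed the previous nvmf_tgt instance, PID 61372, so its listening socket on 10.0.0.2:4420 vanished and every subsequent host connect was refused. The trace then calls disconnect_init 10.0.0.2, which brings a target back up via nvmfappstart -m 0xF0. A generic demo of the kill-then-refused sequence, using a stand-in listener rather than the test's target:

  python3 -m http.server 4420 >/dev/null 2>&1 &   # stand-in listener
  pid=$!; sleep 1                                  # let it bind
  kill -9 "$pid"; wait "$pid" 2>/dev/null          # SIGKILL, as the test does
  (echo > /dev/tcp/127.0.0.1/4420) 2>/dev/null \
      || echo "refused after kill, matching the errno-111 storm"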
00:25:04.558 01:18:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:04.558 [2024-07-16 01:18:20.460760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.558 [2024-07-16 01:18:20.460794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.558 qpair failed and we were unable to recover it. 00:25:04.558 01:18:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:04.558 [2024-07-16 01:18:20.460951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.558 [2024-07-16 01:18:20.460994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.558 qpair failed and we were unable to recover it. 00:25:04.558 [2024-07-16 01:18:20.461148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.558 [2024-07-16 01:18:20.461181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.558 qpair failed and we were unable to recover it. 00:25:04.558 [2024-07-16 01:18:20.461427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.558 [2024-07-16 01:18:20.461487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.558 qpair failed and we were unable to recover it. 00:25:04.558 [2024-07-16 01:18:20.461743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.558 [2024-07-16 01:18:20.461802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.558 qpair failed and we were unable to recover it. 00:25:04.558 [2024-07-16 01:18:20.462078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.558 [2024-07-16 01:18:20.462113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.558 qpair failed and we were unable to recover it. 00:25:04.558 [2024-07-16 01:18:20.462254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.558 [2024-07-16 01:18:20.462296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.558 qpair failed and we were unable to recover it. 00:25:04.558 [2024-07-16 01:18:20.462452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.558 [2024-07-16 01:18:20.462486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.558 qpair failed and we were unable to recover it. 00:25:04.558 [2024-07-16 01:18:20.462667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.558 [2024-07-16 01:18:20.462700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.558 qpair failed and we were unable to recover it. 
00:25:04.558 [2024-07-16 01:18:20.462901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.558 [2024-07-16 01:18:20.462988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.558 qpair failed and we were unable to recover it. 00:25:04.558 [2024-07-16 01:18:20.463317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.558 [2024-07-16 01:18:20.463420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.558 qpair failed and we were unable to recover it. 00:25:04.558 [2024-07-16 01:18:20.463672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.558 [2024-07-16 01:18:20.463707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.559 qpair failed and we were unable to recover it. 00:25:04.559 [2024-07-16 01:18:20.463859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.559 [2024-07-16 01:18:20.463893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.559 qpair failed and we were unable to recover it. 00:25:04.559 [2024-07-16 01:18:20.464092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.559 [2024-07-16 01:18:20.464127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.559 qpair failed and we were unable to recover it. 00:25:04.559 [2024-07-16 01:18:20.464287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.559 [2024-07-16 01:18:20.464322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.559 qpair failed and we were unable to recover it. 00:25:04.559 [2024-07-16 01:18:20.464534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.559 [2024-07-16 01:18:20.464612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.559 qpair failed and we were unable to recover it. 00:25:04.559 01:18:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=61929 00:25:04.559 01:18:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 61929 00:25:04.559 01:18:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 61929 ']' 00:25:04.559 01:18:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:04.559 01:18:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:04.559 01:18:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:04.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:04.559 01:18:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:04.559 01:18:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:25:04.559 01:18:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:04.559 [2024-07-16 01:18:20.466252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.559 [2024-07-16 01:18:20.466287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.559 qpair failed and we were unable to recover it. 00:25:04.559 [2024-07-16 01:18:20.466829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.559 [2024-07-16 01:18:20.466868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.559 qpair failed and we were unable to recover it. 00:25:04.559 [2024-07-16 01:18:20.467106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.559 [2024-07-16 01:18:20.467158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.559 qpair failed and we were unable to recover it. 00:25:04.559 [2024-07-16 01:18:20.467360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.559 [2024-07-16 01:18:20.467425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.559 qpair failed and we were unable to recover it. 00:25:04.559 [2024-07-16 01:18:20.467607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.559 [2024-07-16 01:18:20.467674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.559 qpair failed and we were unable to recover it. 00:25:04.559 [2024-07-16 01:18:20.467806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.559 [2024-07-16 01:18:20.467835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.559 qpair failed and we were unable to recover it. 00:25:04.559 [2024-07-16 01:18:20.467974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.559 [2024-07-16 01:18:20.468004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.559 qpair failed and we were unable to recover it. 00:25:04.559 [2024-07-16 01:18:20.468173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.559 [2024-07-16 01:18:20.468230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.559 qpair failed and we were unable to recover it. 
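[editor's note] The restart path is visible in the trace: nvmf/common.sh launches a fresh target inside the cvl_0_0_ns_spdk network namespace (nvmf_tgt -i 0 -e 0xFFFF -m 0xF0, i.e. shared-memory id 0, tracepoint group mask 0xFFFF, core mask 0xF0 = cores 4-7), records nvmfpid=61929, and waitforlisten blocks until the new process answers on /var/tmp/spdk.sock. The start-and-wait pattern, sketched generically (the poll loop is illustrative, not the autotest helper's exact code):

  # hedged sketch of nvmfappstart + waitforlisten: start the app, then
  # poll until its RPC socket exists while the process is still alive
  ./build/bin/nvmf_tgt -m 0xF0 &
  pid=$!
  until [ -S /var/tmp/spdk.sock ]; do
      kill -0 "$pid" 2>/dev/null || { echo "app died before listening"; exit 1; }
      sleep 0.1
  done
  echo "nvmf_tgt ($pid) is listening on /var/tmp/spdk.sock"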
00:25:04.559 [2024-07-16 01:18:20.471970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.559 [2024-07-16 01:18:20.472005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.559 qpair failed and we were unable to recover it. 00:25:04.559 [2024-07-16 01:18:20.472169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.559 [2024-07-16 01:18:20.472199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.559 qpair failed and we were unable to recover it. 00:25:04.559 [2024-07-16 01:18:20.472412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.559 [2024-07-16 01:18:20.472441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.559 qpair failed and we were unable to recover it. 00:25:04.559 [2024-07-16 01:18:20.472566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.559 [2024-07-16 01:18:20.472592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.559 qpair failed and we were unable to recover it. 00:25:04.559 [2024-07-16 01:18:20.472732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.559 [2024-07-16 01:18:20.472759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.559 qpair failed and we were unable to recover it. 00:25:04.559 [2024-07-16 01:18:20.472893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.559 [2024-07-16 01:18:20.472920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.559 qpair failed and we were unable to recover it. 00:25:04.559 [2024-07-16 01:18:20.473072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.559 [2024-07-16 01:18:20.473098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.559 qpair failed and we were unable to recover it. 00:25:04.559 [2024-07-16 01:18:20.473206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.559 [2024-07-16 01:18:20.473233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.559 qpair failed and we were unable to recover it. 00:25:04.559 [2024-07-16 01:18:20.473377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.559 [2024-07-16 01:18:20.473404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.559 qpair failed and we were unable to recover it. 00:25:04.559 [2024-07-16 01:18:20.473528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.559 [2024-07-16 01:18:20.473555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.559 qpair failed and we were unable to recover it. 
00:25:04.559 [2024-07-16 01:18:20.473711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.559 [2024-07-16 01:18:20.473739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.559 qpair failed and we were unable to recover it. 00:25:04.559 [2024-07-16 01:18:20.473901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.559 [2024-07-16 01:18:20.473928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.559 qpair failed and we were unable to recover it. 00:25:04.559 [2024-07-16 01:18:20.474064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.559 [2024-07-16 01:18:20.474098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.559 qpair failed and we were unable to recover it. 00:25:04.559 [2024-07-16 01:18:20.474208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.559 [2024-07-16 01:18:20.474235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.559 qpair failed and we were unable to recover it. 00:25:04.559 [2024-07-16 01:18:20.474369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.559 [2024-07-16 01:18:20.474402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.559 qpair failed and we were unable to recover it. 00:25:04.559 [2024-07-16 01:18:20.474535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.559 [2024-07-16 01:18:20.474562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.559 qpair failed and we were unable to recover it. 00:25:04.559 [2024-07-16 01:18:20.474695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.559 [2024-07-16 01:18:20.474722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.559 qpair failed and we were unable to recover it. 00:25:04.559 [2024-07-16 01:18:20.474766] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c53f0 (9): Bad file descriptor 00:25:04.559 [2024-07-16 01:18:20.474927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.559 [2024-07-16 01:18:20.474989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.559 qpair failed and we were unable to recover it. 00:25:04.559 [2024-07-16 01:18:20.475114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.559 [2024-07-16 01:18:20.475155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.559 qpair failed and we were unable to recover it. 
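Two different errno values appear by this point in the log: 111 (ECONNREFUSED) on the refused reconnect attempts, and 9 (EBADF) on the "Failed to flush tqpair" line from nvme_tcp_qpair_process_completions, which suggests the qpair's socket descriptor had already been torn down when the flush ran. Note also that the subsequent failures switch from the 0x14b73f0 handle to 0x7f7a58000b90/0x7f7a68000b90-style handles as new qpairs are attempted. A small hypothetical helper (not SPDK code) prints the standard Linux names for both errno values seen above:

    /* Hypothetical helper, not SPDK code: map the errno values from this log
     * to their standard names. On Linux/glibc this prints
     * "errno 111 = Connection refused" and "errno 9 = Bad file descriptor". */
    #include <stdio.h>
    #include <stddef.h>
    #include <string.h>

    int main(void)
    {
        int seen[] = { 111, 9 };  /* errno values observed in the log above */
        for (size_t i = 0; i < sizeof(seen) / sizeof(seen[0]); i++) {
            printf("errno %d = %s\n", seen[i], strerror(seen[i]));
        }
        return 0;
    }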
00:25:04.559 [2024-07-16 01:18:20.475270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.559 [2024-07-16 01:18:20.475299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.559 qpair failed and we were unable to recover it. 00:25:04.559 [2024-07-16 01:18:20.475535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.559 [2024-07-16 01:18:20.475562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.559 qpair failed and we were unable to recover it. 00:25:04.559 [2024-07-16 01:18:20.475717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.559 [2024-07-16 01:18:20.475743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.559 qpair failed and we were unable to recover it. 00:25:04.559 [2024-07-16 01:18:20.475880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.559 [2024-07-16 01:18:20.475908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.559 qpair failed and we were unable to recover it. 00:25:04.559 [2024-07-16 01:18:20.476033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.560 [2024-07-16 01:18:20.476061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.560 qpair failed and we were unable to recover it. 00:25:04.560 [2024-07-16 01:18:20.476196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.560 [2024-07-16 01:18:20.476224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.560 qpair failed and we were unable to recover it. 00:25:04.560 [2024-07-16 01:18:20.476330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.560 [2024-07-16 01:18:20.476372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.560 qpair failed and we were unable to recover it. 00:25:04.560 [2024-07-16 01:18:20.476523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.560 [2024-07-16 01:18:20.476549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.560 qpair failed and we were unable to recover it. 00:25:04.560 [2024-07-16 01:18:20.476675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.560 [2024-07-16 01:18:20.476704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.560 qpair failed and we were unable to recover it. 00:25:04.560 [2024-07-16 01:18:20.476811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.560 [2024-07-16 01:18:20.476839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.560 qpair failed and we were unable to recover it. 
00:25:04.560 [2024-07-16 01:18:20.476968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.560 [2024-07-16 01:18:20.476995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.560 qpair failed and we were unable to recover it. 00:25:04.560 [2024-07-16 01:18:20.477101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.560 [2024-07-16 01:18:20.477129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.560 qpair failed and we were unable to recover it. 00:25:04.560 [2024-07-16 01:18:20.477260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.560 [2024-07-16 01:18:20.477286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.560 qpair failed and we were unable to recover it. 00:25:04.560 [2024-07-16 01:18:20.477415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.560 [2024-07-16 01:18:20.477442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.560 qpair failed and we were unable to recover it. 00:25:04.560 [2024-07-16 01:18:20.477549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.560 [2024-07-16 01:18:20.477575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.560 qpair failed and we were unable to recover it. 00:25:04.560 [2024-07-16 01:18:20.477666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.560 [2024-07-16 01:18:20.477694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.560 qpair failed and we were unable to recover it. 00:25:04.560 [2024-07-16 01:18:20.477833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.560 [2024-07-16 01:18:20.477865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.560 qpair failed and we were unable to recover it. 00:25:04.560 [2024-07-16 01:18:20.478022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.560 [2024-07-16 01:18:20.478049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.560 qpair failed and we were unable to recover it. 00:25:04.560 [2024-07-16 01:18:20.478176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.560 [2024-07-16 01:18:20.478204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.560 qpair failed and we were unable to recover it. 00:25:04.560 [2024-07-16 01:18:20.478355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.560 [2024-07-16 01:18:20.478383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.560 qpair failed and we were unable to recover it. 
00:25:04.560 [2024-07-16 01:18:20.478551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.560 [2024-07-16 01:18:20.478579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.560 qpair failed and we were unable to recover it. 00:25:04.560 [2024-07-16 01:18:20.478709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.560 [2024-07-16 01:18:20.478737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.560 qpair failed and we were unable to recover it. 00:25:04.560 [2024-07-16 01:18:20.478895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.560 [2024-07-16 01:18:20.478922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.560 qpair failed and we were unable to recover it. 00:25:04.560 [2024-07-16 01:18:20.479065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.560 [2024-07-16 01:18:20.479092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.560 qpair failed and we were unable to recover it. 00:25:04.560 [2024-07-16 01:18:20.479221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.560 [2024-07-16 01:18:20.479247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.560 qpair failed and we were unable to recover it. 00:25:04.560 [2024-07-16 01:18:20.479346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.560 [2024-07-16 01:18:20.479373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.560 qpair failed and we were unable to recover it. 00:25:04.560 [2024-07-16 01:18:20.479495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.560 [2024-07-16 01:18:20.479522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.560 qpair failed and we were unable to recover it. 00:25:04.560 [2024-07-16 01:18:20.479652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.560 [2024-07-16 01:18:20.479679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.560 qpair failed and we were unable to recover it. 00:25:04.560 [2024-07-16 01:18:20.479792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.560 [2024-07-16 01:18:20.479819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.560 qpair failed and we were unable to recover it. 00:25:04.560 [2024-07-16 01:18:20.479976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.560 [2024-07-16 01:18:20.480003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.560 qpair failed and we were unable to recover it. 
00:25:04.560 [2024-07-16 01:18:20.480119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.560 [2024-07-16 01:18:20.480146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.560 qpair failed and we were unable to recover it. 00:25:04.560 [2024-07-16 01:18:20.480266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.560 [2024-07-16 01:18:20.480293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.560 qpair failed and we were unable to recover it. 00:25:04.560 [2024-07-16 01:18:20.480460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.560 [2024-07-16 01:18:20.480486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.560 qpair failed and we were unable to recover it. 00:25:04.560 [2024-07-16 01:18:20.480604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.560 [2024-07-16 01:18:20.480632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.560 qpair failed and we were unable to recover it. 00:25:04.560 [2024-07-16 01:18:20.480759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.560 [2024-07-16 01:18:20.480786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.560 qpair failed and we were unable to recover it. 00:25:04.560 [2024-07-16 01:18:20.480915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.560 [2024-07-16 01:18:20.480953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.560 qpair failed and we were unable to recover it. 00:25:04.560 [2024-07-16 01:18:20.481061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.560 [2024-07-16 01:18:20.481089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.560 qpair failed and we were unable to recover it. 00:25:04.560 [2024-07-16 01:18:20.481217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.560 [2024-07-16 01:18:20.481267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.560 qpair failed and we were unable to recover it. 00:25:04.560 [2024-07-16 01:18:20.481420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.560 [2024-07-16 01:18:20.481447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.560 qpair failed and we were unable to recover it. 00:25:04.560 [2024-07-16 01:18:20.481587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.560 [2024-07-16 01:18:20.481615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.560 qpair failed and we were unable to recover it. 
00:25:04.560 [2024-07-16 01:18:20.481781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.560 [2024-07-16 01:18:20.481808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.560 qpair failed and we were unable to recover it. 00:25:04.560 [2024-07-16 01:18:20.481930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.560 [2024-07-16 01:18:20.481964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.560 qpair failed and we were unable to recover it. 00:25:04.560 [2024-07-16 01:18:20.482094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.560 [2024-07-16 01:18:20.482122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.560 qpair failed and we were unable to recover it. 00:25:04.560 [2024-07-16 01:18:20.482271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.560 [2024-07-16 01:18:20.482299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.560 qpair failed and we were unable to recover it. 00:25:04.560 [2024-07-16 01:18:20.482441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.560 [2024-07-16 01:18:20.482468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.560 qpair failed and we were unable to recover it. 00:25:04.560 [2024-07-16 01:18:20.482597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.561 [2024-07-16 01:18:20.482626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.561 qpair failed and we were unable to recover it. 00:25:04.561 [2024-07-16 01:18:20.482735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.561 [2024-07-16 01:18:20.482763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.561 qpair failed and we were unable to recover it. 00:25:04.561 [2024-07-16 01:18:20.482893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.561 [2024-07-16 01:18:20.482926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.561 qpair failed and we were unable to recover it. 00:25:04.561 [2024-07-16 01:18:20.483060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.561 [2024-07-16 01:18:20.483088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.561 qpair failed and we were unable to recover it. 00:25:04.561 [2024-07-16 01:18:20.483196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.561 [2024-07-16 01:18:20.483224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.561 qpair failed and we were unable to recover it. 
00:25:04.561 [2024-07-16 01:18:20.483364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.561 [2024-07-16 01:18:20.483392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.561 qpair failed and we were unable to recover it. 00:25:04.561 [2024-07-16 01:18:20.483540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.561 [2024-07-16 01:18:20.483567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.561 qpair failed and we were unable to recover it. 00:25:04.561 [2024-07-16 01:18:20.483674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.561 [2024-07-16 01:18:20.483702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.561 qpair failed and we were unable to recover it. 00:25:04.561 [2024-07-16 01:18:20.483848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.561 [2024-07-16 01:18:20.483882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.561 qpair failed and we were unable to recover it. 00:25:04.561 [2024-07-16 01:18:20.484003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.561 [2024-07-16 01:18:20.484031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.561 qpair failed and we were unable to recover it. 00:25:04.561 [2024-07-16 01:18:20.484183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.561 [2024-07-16 01:18:20.484211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.561 qpair failed and we were unable to recover it. 00:25:04.561 [2024-07-16 01:18:20.484368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.561 [2024-07-16 01:18:20.484399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.561 qpair failed and we were unable to recover it. 00:25:04.561 [2024-07-16 01:18:20.484548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.561 [2024-07-16 01:18:20.484575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.561 qpair failed and we were unable to recover it. 00:25:04.561 [2024-07-16 01:18:20.484701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.561 [2024-07-16 01:18:20.484728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.561 qpair failed and we were unable to recover it. 00:25:04.561 [2024-07-16 01:18:20.484866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.561 [2024-07-16 01:18:20.484893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.561 qpair failed and we were unable to recover it. 
00:25:04.561 [2024-07-16 01:18:20.485027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.561 [2024-07-16 01:18:20.485054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.561 qpair failed and we were unable to recover it. 00:25:04.561 [2024-07-16 01:18:20.485191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.561 [2024-07-16 01:18:20.485219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.561 qpair failed and we were unable to recover it. 00:25:04.561 [2024-07-16 01:18:20.485375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.561 [2024-07-16 01:18:20.485402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.561 qpair failed and we were unable to recover it. 00:25:04.561 [2024-07-16 01:18:20.485499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.561 [2024-07-16 01:18:20.485527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.561 qpair failed and we were unable to recover it. 00:25:04.561 [2024-07-16 01:18:20.485658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.561 [2024-07-16 01:18:20.485686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.561 qpair failed and we were unable to recover it. 00:25:04.561 [2024-07-16 01:18:20.485874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.561 [2024-07-16 01:18:20.485915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.561 qpair failed and we were unable to recover it. 00:25:04.561 [2024-07-16 01:18:20.486089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.561 [2024-07-16 01:18:20.486118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.561 qpair failed and we were unable to recover it. 00:25:04.561 [2024-07-16 01:18:20.486257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.561 [2024-07-16 01:18:20.486285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.561 qpair failed and we were unable to recover it. 00:25:04.561 [2024-07-16 01:18:20.486409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.561 [2024-07-16 01:18:20.486436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.561 qpair failed and we were unable to recover it. 00:25:04.561 [2024-07-16 01:18:20.486560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.561 [2024-07-16 01:18:20.486586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.561 qpair failed and we were unable to recover it. 
00:25:04.561 [2024-07-16 01:18:20.486756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.561 [2024-07-16 01:18:20.486783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.561 qpair failed and we were unable to recover it. 00:25:04.561 [2024-07-16 01:18:20.486931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.561 [2024-07-16 01:18:20.486966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.561 qpair failed and we were unable to recover it. 00:25:04.561 [2024-07-16 01:18:20.487064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.561 [2024-07-16 01:18:20.487090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.561 qpair failed and we were unable to recover it. 00:25:04.561 [2024-07-16 01:18:20.487196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.561 [2024-07-16 01:18:20.487223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.561 qpair failed and we were unable to recover it. 00:25:04.561 [2024-07-16 01:18:20.487345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.561 [2024-07-16 01:18:20.487372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.561 qpair failed and we were unable to recover it. 00:25:04.561 [2024-07-16 01:18:20.487522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.561 [2024-07-16 01:18:20.487548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.561 qpair failed and we were unable to recover it. 00:25:04.561 [2024-07-16 01:18:20.487670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.561 [2024-07-16 01:18:20.487709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.561 qpair failed and we were unable to recover it. 00:25:04.561 [2024-07-16 01:18:20.487828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.561 [2024-07-16 01:18:20.487855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.561 qpair failed and we were unable to recover it. 00:25:04.561 [2024-07-16 01:18:20.487990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.561 [2024-07-16 01:18:20.488017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.561 qpair failed and we were unable to recover it. 00:25:04.561 [2024-07-16 01:18:20.488126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.561 [2024-07-16 01:18:20.488152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.561 qpair failed and we were unable to recover it. 
00:25:04.561 [2024-07-16 01:18:20.488281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.561 [2024-07-16 01:18:20.488307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.561 qpair failed and we were unable to recover it. 00:25:04.561 [2024-07-16 01:18:20.488437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.561 [2024-07-16 01:18:20.488464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.561 qpair failed and we were unable to recover it. 00:25:04.561 [2024-07-16 01:18:20.488563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.561 [2024-07-16 01:18:20.488590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.561 qpair failed and we were unable to recover it. 00:25:04.561 [2024-07-16 01:18:20.488747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.561 [2024-07-16 01:18:20.488774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.561 qpair failed and we were unable to recover it. 00:25:04.561 [2024-07-16 01:18:20.488872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.561 [2024-07-16 01:18:20.488900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.561 qpair failed and we were unable to recover it. 00:25:04.561 [2024-07-16 01:18:20.489034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.561 [2024-07-16 01:18:20.489062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.561 qpair failed and we were unable to recover it. 00:25:04.562 [2024-07-16 01:18:20.489189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.562 [2024-07-16 01:18:20.489215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.562 qpair failed and we were unable to recover it. 00:25:04.562 [2024-07-16 01:18:20.489386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.562 [2024-07-16 01:18:20.489413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.562 qpair failed and we were unable to recover it. 00:25:04.562 [2024-07-16 01:18:20.489568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.562 [2024-07-16 01:18:20.489595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.562 qpair failed and we were unable to recover it. 00:25:04.562 [2024-07-16 01:18:20.489733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.562 [2024-07-16 01:18:20.489760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.562 qpair failed and we were unable to recover it. 
00:25:04.562 [2024-07-16 01:18:20.489882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.562 [2024-07-16 01:18:20.489909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.562 qpair failed and we were unable to recover it. 00:25:04.562 [2024-07-16 01:18:20.490039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.562 [2024-07-16 01:18:20.490067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.562 qpair failed and we were unable to recover it. 00:25:04.562 [2024-07-16 01:18:20.490217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.562 [2024-07-16 01:18:20.490255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.562 qpair failed and we were unable to recover it. 00:25:04.562 [2024-07-16 01:18:20.490387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.562 [2024-07-16 01:18:20.490419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.562 qpair failed and we were unable to recover it. 00:25:04.562 [2024-07-16 01:18:20.490552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.562 [2024-07-16 01:18:20.490579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.562 qpair failed and we were unable to recover it. 00:25:04.562 [2024-07-16 01:18:20.490707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.562 [2024-07-16 01:18:20.490733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.562 qpair failed and we were unable to recover it. 00:25:04.562 [2024-07-16 01:18:20.490843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.562 [2024-07-16 01:18:20.490874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.562 qpair failed and we were unable to recover it. 00:25:04.562 [2024-07-16 01:18:20.491032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.562 [2024-07-16 01:18:20.491059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.562 qpair failed and we were unable to recover it. 00:25:04.562 [2024-07-16 01:18:20.491193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.562 [2024-07-16 01:18:20.491221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.562 qpair failed and we were unable to recover it. 00:25:04.562 [2024-07-16 01:18:20.491387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.562 [2024-07-16 01:18:20.491414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.562 qpair failed and we were unable to recover it. 
00:25:04.562 [2024-07-16 01:18:20.491548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.562 [2024-07-16 01:18:20.491575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.562 qpair failed and we were unable to recover it. 00:25:04.562 [2024-07-16 01:18:20.491700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.562 [2024-07-16 01:18:20.491728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.562 qpair failed and we were unable to recover it. 00:25:04.562 [2024-07-16 01:18:20.491860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.562 [2024-07-16 01:18:20.491887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.562 qpair failed and we were unable to recover it. 00:25:04.562 [2024-07-16 01:18:20.492025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.562 [2024-07-16 01:18:20.492052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.562 qpair failed and we were unable to recover it. 00:25:04.562 [2024-07-16 01:18:20.492203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.562 [2024-07-16 01:18:20.492230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.562 qpair failed and we were unable to recover it. 00:25:04.562 [2024-07-16 01:18:20.492335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.562 [2024-07-16 01:18:20.492362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.562 qpair failed and we were unable to recover it. 00:25:04.562 [2024-07-16 01:18:20.492502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.562 [2024-07-16 01:18:20.492529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.562 qpair failed and we were unable to recover it. 00:25:04.562 [2024-07-16 01:18:20.492629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.562 [2024-07-16 01:18:20.492656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.562 qpair failed and we were unable to recover it. 00:25:04.562 [2024-07-16 01:18:20.492782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.562 [2024-07-16 01:18:20.492809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.562 qpair failed and we were unable to recover it. 00:25:04.562 [2024-07-16 01:18:20.492909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.562 [2024-07-16 01:18:20.492936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.562 qpair failed and we were unable to recover it. 
00:25:04.562 [2024-07-16 01:18:20.493080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.562 [2024-07-16 01:18:20.493107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.562 qpair failed and we were unable to recover it. 00:25:04.562 [2024-07-16 01:18:20.493234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.562 [2024-07-16 01:18:20.493271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.562 qpair failed and we were unable to recover it. 00:25:04.562 [2024-07-16 01:18:20.493379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.562 [2024-07-16 01:18:20.493406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.562 qpair failed and we were unable to recover it. 00:25:04.562 [2024-07-16 01:18:20.493564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.562 [2024-07-16 01:18:20.493592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.562 qpair failed and we were unable to recover it. 00:25:04.562 [2024-07-16 01:18:20.493686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.562 [2024-07-16 01:18:20.493712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.562 qpair failed and we were unable to recover it. 00:25:04.562 [2024-07-16 01:18:20.493874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.562 [2024-07-16 01:18:20.493901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.562 qpair failed and we were unable to recover it. 00:25:04.562 [2024-07-16 01:18:20.494017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.562 [2024-07-16 01:18:20.494045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.562 qpair failed and we were unable to recover it. 00:25:04.562 [2024-07-16 01:18:20.494152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.562 [2024-07-16 01:18:20.494178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.562 qpair failed and we were unable to recover it. 00:25:04.562 [2024-07-16 01:18:20.494309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.562 [2024-07-16 01:18:20.494337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.562 qpair failed and we were unable to recover it. 00:25:04.562 [2024-07-16 01:18:20.494460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.562 [2024-07-16 01:18:20.494487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.562 qpair failed and we were unable to recover it. 
00:25:04.562 [2024-07-16 01:18:20.494633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.562 [2024-07-16 01:18:20.494660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.562 qpair failed and we were unable to recover it. 00:25:04.562 [2024-07-16 01:18:20.494794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.562 [2024-07-16 01:18:20.494821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.562 qpair failed and we were unable to recover it. 00:25:04.562 [2024-07-16 01:18:20.494964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.562 [2024-07-16 01:18:20.494992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.562 qpair failed and we were unable to recover it. 00:25:04.562 [2024-07-16 01:18:20.495161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.562 [2024-07-16 01:18:20.495188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.562 qpair failed and we were unable to recover it. 00:25:04.562 [2024-07-16 01:18:20.495329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.562 [2024-07-16 01:18:20.495356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.562 qpair failed and we were unable to recover it. 00:25:04.562 [2024-07-16 01:18:20.495488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.562 [2024-07-16 01:18:20.495514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.562 qpair failed and we were unable to recover it. 00:25:04.562 [2024-07-16 01:18:20.495661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.562 [2024-07-16 01:18:20.495689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.562 qpair failed and we were unable to recover it. 00:25:04.563 [2024-07-16 01:18:20.495826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.563 [2024-07-16 01:18:20.495852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.563 qpair failed and we were unable to recover it. 00:25:04.563 [2024-07-16 01:18:20.495987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.563 [2024-07-16 01:18:20.496015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.563 qpair failed and we were unable to recover it. 00:25:04.563 [2024-07-16 01:18:20.496141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.563 [2024-07-16 01:18:20.496168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.563 qpair failed and we were unable to recover it. 
00:25:04.563 [2024-07-16 01:18:20.496276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.563 [2024-07-16 01:18:20.496303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.563 qpair failed and we were unable to recover it. 00:25:04.563 [2024-07-16 01:18:20.496453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.563 [2024-07-16 01:18:20.496479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.563 qpair failed and we were unable to recover it. 00:25:04.563 [2024-07-16 01:18:20.496603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.563 [2024-07-16 01:18:20.496630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.563 qpair failed and we were unable to recover it. 00:25:04.563 [2024-07-16 01:18:20.496737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.563 [2024-07-16 01:18:20.496763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.563 qpair failed and we were unable to recover it. 00:25:04.563 [2024-07-16 01:18:20.496871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.563 [2024-07-16 01:18:20.496913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.563 qpair failed and we were unable to recover it. 00:25:04.563 [2024-07-16 01:18:20.497070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.563 [2024-07-16 01:18:20.497099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.563 qpair failed and we were unable to recover it. 00:25:04.563 [2024-07-16 01:18:20.497227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.563 [2024-07-16 01:18:20.497270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.563 qpair failed and we were unable to recover it. 00:25:04.563 [2024-07-16 01:18:20.497427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.563 [2024-07-16 01:18:20.497454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.563 qpair failed and we were unable to recover it. 00:25:04.563 [2024-07-16 01:18:20.497571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.563 [2024-07-16 01:18:20.497598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.563 qpair failed and we were unable to recover it. 00:25:04.563 [2024-07-16 01:18:20.497737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.563 [2024-07-16 01:18:20.497765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.563 qpair failed and we were unable to recover it. 
00:25:04.563 [2024-07-16 01:18:20.497891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.563 [2024-07-16 01:18:20.497919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:04.563 qpair failed and we were unable to recover it.
00:25:04.563 [2024-07-16 01:18:20.498081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.563 [2024-07-16 01:18:20.498122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:04.563 qpair failed and we were unable to recover it.
00:25:04.563 [2024-07-16 01:18:20.501057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.563 [2024-07-16 01:18:20.501098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:04.563 qpair failed and we were unable to recover it.
00:25:04.564 [2024-07-16 01:18:20.501440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.564 [2024-07-16 01:18:20.501476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.564 qpair failed and we were unable to recover it.
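Every record above is the same three-line failure, repeated for each of the four queue pairs in play (tqpair=0x7f7a68000b90, 0x7f7a60000b90, 0x7f7a58000b90, 0x14b73f0): connect() to 10.0.0.2 port 4420 returns errno 111, which on Linux is ECONNREFUSED, meaning nothing was listening on the NVMe/TCP target port at that moment, so nvme_tcp gives up on the queue pair. The standalone C sketch below (illustrative only, not SPDK source; address and port copied from the log) reproduces the same errno when no listener is present:

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Target address/port taken from the log lines above. */
    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With no listener on 10.0.0.2:4420 this prints errno = 111
         * (Connection refused), the same value posix_sock_create logs. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}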
00:25:04.566 [2024-07-16 01:18:20.519447] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization...
00:25:04.566 [2024-07-16 01:18:20.519512] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
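These two lines mark an SPDK application (the nvmf target process in this test) starting up and handing the bracketed argument string to DPDK's Environment Abstraction Layer; the connect() failures around them appear to line up with that restart window. As a rough, hypothetical illustration only (assuming a DPDK development install; this is not how SPDK itself wires up EAL), the same parameter block could be passed to DPDK's rte_eal_init() like this:

#include <stdio.h>
#include <rte_eal.h>

int main(void)
{
    /* Argument vector copied verbatim from the "[ DPDK EAL parameters: ... ]"
     * log line above; rte_eal_init() parses it exactly like argc/argv. */
    char *eal_argv[] = {
        "nvmf",
        "-c", "0xF0",
        "--no-telemetry",
        "--log-level=lib.eal:6",
        "--log-level=lib.cryptodev:5",
        "--log-level=lib.power:5",
        "--log-level=user1:6",
        "--base-virtaddr=0x200000000000",
        "--match-allocations",
        "--file-prefix=spdk0",
        "--proc-type=auto",
    };
    int eal_argc = (int)(sizeof(eal_argv) / sizeof(eal_argv[0]));

    if (rte_eal_init(eal_argc, eal_argv) < 0) {
        fprintf(stderr, "rte_eal_init failed\n");
        return 1;
    }
    /* ... application work would go here ... */
    rte_eal_cleanup();
    return 0;
}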
00:25:04.855 [2024-07-16 01:18:20.521706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.855 [2024-07-16 01:18:20.521731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.855 qpair failed and we were unable to recover it.
00:25:04.855 [2024-07-16 01:18:20.522162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.855 [2024-07-16 01:18:20.522221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:04.855 qpair failed and we were unable to recover it.
00:25:04.855 [2024-07-16 01:18:20.522475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.855 [2024-07-16 01:18:20.522575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:04.855 qpair failed and we were unable to recover it.
00:25:04.857 [2024-07-16 01:18:20.531839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.857 [2024-07-16 01:18:20.531879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:04.857 qpair failed and we were unable to recover it.
00:25:04.857 [2024-07-16 01:18:20.533010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.857 [2024-07-16 01:18:20.533038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.857 qpair failed and we were unable to recover it. 00:25:04.857 [2024-07-16 01:18:20.533187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.857 [2024-07-16 01:18:20.533214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.857 qpair failed and we were unable to recover it. 00:25:04.857 [2024-07-16 01:18:20.533381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.857 [2024-07-16 01:18:20.533408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.857 qpair failed and we were unable to recover it. 00:25:04.857 [2024-07-16 01:18:20.533535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.857 [2024-07-16 01:18:20.533570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.857 qpair failed and we were unable to recover it. 00:25:04.857 [2024-07-16 01:18:20.533711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.857 [2024-07-16 01:18:20.533740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.857 qpair failed and we were unable to recover it. 00:25:04.857 [2024-07-16 01:18:20.533842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.857 [2024-07-16 01:18:20.533869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.857 qpair failed and we were unable to recover it. 00:25:04.857 [2024-07-16 01:18:20.534006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.857 [2024-07-16 01:18:20.534034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.857 qpair failed and we were unable to recover it. 00:25:04.857 [2024-07-16 01:18:20.534159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.857 [2024-07-16 01:18:20.534185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.857 qpair failed and we were unable to recover it. 00:25:04.857 [2024-07-16 01:18:20.534309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.857 [2024-07-16 01:18:20.534335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.857 qpair failed and we were unable to recover it. 00:25:04.857 [2024-07-16 01:18:20.534437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.857 [2024-07-16 01:18:20.534465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.857 qpair failed and we were unable to recover it. 
00:25:04.857 [2024-07-16 01:18:20.534570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.857 [2024-07-16 01:18:20.534596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.857 qpair failed and we were unable to recover it. 00:25:04.857 [2024-07-16 01:18:20.534693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.857 [2024-07-16 01:18:20.534720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.857 qpair failed and we were unable to recover it. 00:25:04.857 [2024-07-16 01:18:20.534823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.857 [2024-07-16 01:18:20.534849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.857 qpair failed and we were unable to recover it. 00:25:04.857 [2024-07-16 01:18:20.534977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.857 [2024-07-16 01:18:20.535008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.857 qpair failed and we were unable to recover it. 00:25:04.857 [2024-07-16 01:18:20.535156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.857 [2024-07-16 01:18:20.535183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.857 qpair failed and we were unable to recover it. 00:25:04.857 [2024-07-16 01:18:20.535313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.857 [2024-07-16 01:18:20.535340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.857 qpair failed and we were unable to recover it. 00:25:04.857 [2024-07-16 01:18:20.535470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.857 [2024-07-16 01:18:20.535496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.857 qpair failed and we were unable to recover it. 00:25:04.857 [2024-07-16 01:18:20.535626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.857 [2024-07-16 01:18:20.535652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.857 qpair failed and we were unable to recover it. 00:25:04.857 [2024-07-16 01:18:20.535833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.857 [2024-07-16 01:18:20.535873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.857 qpair failed and we were unable to recover it. 00:25:04.857 [2024-07-16 01:18:20.535987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.857 [2024-07-16 01:18:20.536015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.857 qpair failed and we were unable to recover it. 
00:25:04.857 [2024-07-16 01:18:20.536167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.857 [2024-07-16 01:18:20.536194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.857 qpair failed and we were unable to recover it. 00:25:04.857 [2024-07-16 01:18:20.536316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.857 [2024-07-16 01:18:20.536352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.857 qpair failed and we were unable to recover it. 00:25:04.857 [2024-07-16 01:18:20.536488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.857 [2024-07-16 01:18:20.536515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.857 qpair failed and we were unable to recover it. 00:25:04.857 [2024-07-16 01:18:20.536646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.857 [2024-07-16 01:18:20.536673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.857 qpair failed and we were unable to recover it. 00:25:04.857 [2024-07-16 01:18:20.536821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.857 [2024-07-16 01:18:20.536847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.857 qpair failed and we were unable to recover it. 00:25:04.857 [2024-07-16 01:18:20.536971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.857 [2024-07-16 01:18:20.536999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.857 qpair failed and we were unable to recover it. 00:25:04.857 [2024-07-16 01:18:20.537127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.857 [2024-07-16 01:18:20.537153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.857 qpair failed and we were unable to recover it. 00:25:04.857 [2024-07-16 01:18:20.537288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.857 [2024-07-16 01:18:20.537315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.857 qpair failed and we were unable to recover it. 00:25:04.857 [2024-07-16 01:18:20.537441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.857 [2024-07-16 01:18:20.537467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.857 qpair failed and we were unable to recover it. 00:25:04.857 [2024-07-16 01:18:20.537594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.858 [2024-07-16 01:18:20.537620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.858 qpair failed and we were unable to recover it. 
00:25:04.858 [2024-07-16 01:18:20.537732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.858 [2024-07-16 01:18:20.537760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.858 qpair failed and we were unable to recover it. 00:25:04.858 [2024-07-16 01:18:20.537911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.858 [2024-07-16 01:18:20.537938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.858 qpair failed and we were unable to recover it. 00:25:04.858 [2024-07-16 01:18:20.538077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.858 [2024-07-16 01:18:20.538103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.858 qpair failed and we were unable to recover it. 00:25:04.858 [2024-07-16 01:18:20.538229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.858 [2024-07-16 01:18:20.538262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.858 qpair failed and we were unable to recover it. 00:25:04.858 [2024-07-16 01:18:20.538355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.858 [2024-07-16 01:18:20.538382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.858 qpair failed and we were unable to recover it. 00:25:04.858 [2024-07-16 01:18:20.538488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.858 [2024-07-16 01:18:20.538515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.858 qpair failed and we were unable to recover it. 00:25:04.858 [2024-07-16 01:18:20.538642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.858 [2024-07-16 01:18:20.538671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.858 qpair failed and we were unable to recover it. 00:25:04.858 [2024-07-16 01:18:20.538774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.858 [2024-07-16 01:18:20.538800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.858 qpair failed and we were unable to recover it. 00:25:04.858 [2024-07-16 01:18:20.538898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.858 [2024-07-16 01:18:20.538924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.858 qpair failed and we were unable to recover it. 00:25:04.858 [2024-07-16 01:18:20.539053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.858 [2024-07-16 01:18:20.539080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.858 qpair failed and we were unable to recover it. 
00:25:04.858 [2024-07-16 01:18:20.539235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.858 [2024-07-16 01:18:20.539279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.858 qpair failed and we were unable to recover it. 00:25:04.858 [2024-07-16 01:18:20.539415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.858 [2024-07-16 01:18:20.539443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.858 qpair failed and we were unable to recover it. 00:25:04.858 [2024-07-16 01:18:20.539538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.858 [2024-07-16 01:18:20.539567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.858 qpair failed and we were unable to recover it. 00:25:04.858 [2024-07-16 01:18:20.539673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.858 [2024-07-16 01:18:20.539702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.858 qpair failed and we were unable to recover it. 00:25:04.858 [2024-07-16 01:18:20.539832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.858 [2024-07-16 01:18:20.539859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.858 qpair failed and we were unable to recover it. 00:25:04.858 [2024-07-16 01:18:20.539976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.858 [2024-07-16 01:18:20.540005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.858 qpair failed and we were unable to recover it. 00:25:04.858 [2024-07-16 01:18:20.540099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.858 [2024-07-16 01:18:20.540125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.858 qpair failed and we were unable to recover it. 00:25:04.858 [2024-07-16 01:18:20.540251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.858 [2024-07-16 01:18:20.540278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.858 qpair failed and we were unable to recover it. 00:25:04.858 [2024-07-16 01:18:20.540376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.858 [2024-07-16 01:18:20.540403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.858 qpair failed and we were unable to recover it. 00:25:04.858 [2024-07-16 01:18:20.540525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.858 [2024-07-16 01:18:20.540552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.858 qpair failed and we were unable to recover it. 
00:25:04.858 [2024-07-16 01:18:20.540654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.858 [2024-07-16 01:18:20.540681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.858 qpair failed and we were unable to recover it. 00:25:04.858 [2024-07-16 01:18:20.540888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.858 [2024-07-16 01:18:20.540916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.858 qpair failed and we were unable to recover it. 00:25:04.858 [2024-07-16 01:18:20.541073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.858 [2024-07-16 01:18:20.541101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.858 qpair failed and we were unable to recover it. 00:25:04.858 [2024-07-16 01:18:20.541248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.858 [2024-07-16 01:18:20.541280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.858 qpair failed and we were unable to recover it. 00:25:04.858 [2024-07-16 01:18:20.541433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.858 [2024-07-16 01:18:20.541461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.858 qpair failed and we were unable to recover it. 00:25:04.858 [2024-07-16 01:18:20.541588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.858 [2024-07-16 01:18:20.541616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.858 qpair failed and we were unable to recover it. 00:25:04.858 [2024-07-16 01:18:20.541748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.858 [2024-07-16 01:18:20.541774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.858 qpair failed and we were unable to recover it. 00:25:04.858 [2024-07-16 01:18:20.541904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.858 [2024-07-16 01:18:20.541931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.858 qpair failed and we were unable to recover it. 00:25:04.858 [2024-07-16 01:18:20.542047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.858 [2024-07-16 01:18:20.542075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.858 qpair failed and we were unable to recover it. 00:25:04.858 [2024-07-16 01:18:20.542288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.858 [2024-07-16 01:18:20.542315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.858 qpair failed and we were unable to recover it. 
00:25:04.858 [2024-07-16 01:18:20.542448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.858 [2024-07-16 01:18:20.542475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.858 qpair failed and we were unable to recover it. 00:25:04.858 [2024-07-16 01:18:20.542625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.858 [2024-07-16 01:18:20.542653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.858 qpair failed and we were unable to recover it. 00:25:04.858 [2024-07-16 01:18:20.542769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.858 [2024-07-16 01:18:20.542796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.858 qpair failed and we were unable to recover it. 00:25:04.858 [2024-07-16 01:18:20.542923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.858 [2024-07-16 01:18:20.542967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.858 qpair failed and we were unable to recover it. 00:25:04.858 [2024-07-16 01:18:20.543088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.858 [2024-07-16 01:18:20.543116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.858 qpair failed and we were unable to recover it. 00:25:04.858 [2024-07-16 01:18:20.543241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.858 [2024-07-16 01:18:20.543267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.858 qpair failed and we were unable to recover it. 00:25:04.858 [2024-07-16 01:18:20.543397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.858 [2024-07-16 01:18:20.543424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.858 qpair failed and we were unable to recover it. 00:25:04.858 [2024-07-16 01:18:20.543581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.858 [2024-07-16 01:18:20.543608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.858 qpair failed and we were unable to recover it. 00:25:04.858 [2024-07-16 01:18:20.543759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.858 [2024-07-16 01:18:20.543786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.858 qpair failed and we were unable to recover it. 00:25:04.858 [2024-07-16 01:18:20.543997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.858 [2024-07-16 01:18:20.544025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.858 qpair failed and we were unable to recover it. 
00:25:04.858 [2024-07-16 01:18:20.544182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.859 [2024-07-16 01:18:20.544209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.859 qpair failed and we were unable to recover it. 00:25:04.859 [2024-07-16 01:18:20.544333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.859 [2024-07-16 01:18:20.544360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.859 qpair failed and we were unable to recover it. 00:25:04.859 [2024-07-16 01:18:20.544466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.859 [2024-07-16 01:18:20.544494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.859 qpair failed and we were unable to recover it. 00:25:04.859 [2024-07-16 01:18:20.544597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.859 [2024-07-16 01:18:20.544625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.859 qpair failed and we were unable to recover it. 00:25:04.859 [2024-07-16 01:18:20.544780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.859 [2024-07-16 01:18:20.544807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.859 qpair failed and we were unable to recover it. 00:25:04.859 [2024-07-16 01:18:20.544902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.859 [2024-07-16 01:18:20.544930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.859 qpair failed and we were unable to recover it. 00:25:04.859 [2024-07-16 01:18:20.545076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.859 [2024-07-16 01:18:20.545115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.859 qpair failed and we were unable to recover it. 00:25:04.859 [2024-07-16 01:18:20.545274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.859 [2024-07-16 01:18:20.545310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.859 qpair failed and we were unable to recover it. 00:25:04.859 [2024-07-16 01:18:20.545435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.859 [2024-07-16 01:18:20.545463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.859 qpair failed and we were unable to recover it. 00:25:04.859 [2024-07-16 01:18:20.545588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.859 [2024-07-16 01:18:20.545614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.859 qpair failed and we were unable to recover it. 
00:25:04.859 [2024-07-16 01:18:20.545775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.859 [2024-07-16 01:18:20.545810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.859 qpair failed and we were unable to recover it. 00:25:04.859 [2024-07-16 01:18:20.545935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.859 [2024-07-16 01:18:20.545975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.859 qpair failed and we were unable to recover it. 00:25:04.859 [2024-07-16 01:18:20.546086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.859 [2024-07-16 01:18:20.546114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.859 qpair failed and we were unable to recover it. 00:25:04.859 [2024-07-16 01:18:20.546220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.859 [2024-07-16 01:18:20.546256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.859 qpair failed and we were unable to recover it. 00:25:04.859 [2024-07-16 01:18:20.546400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.859 [2024-07-16 01:18:20.546426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.859 qpair failed and we were unable to recover it. 00:25:04.859 [2024-07-16 01:18:20.546574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.859 [2024-07-16 01:18:20.546600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.859 qpair failed and we were unable to recover it. 00:25:04.859 [2024-07-16 01:18:20.546725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.859 [2024-07-16 01:18:20.546752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.859 qpair failed and we were unable to recover it. 00:25:04.859 [2024-07-16 01:18:20.546912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.859 [2024-07-16 01:18:20.546938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.859 qpair failed and we were unable to recover it. 00:25:04.859 [2024-07-16 01:18:20.547071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.859 [2024-07-16 01:18:20.547098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.859 qpair failed and we were unable to recover it. 00:25:04.859 [2024-07-16 01:18:20.547200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.859 [2024-07-16 01:18:20.547227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.859 qpair failed and we were unable to recover it. 
00:25:04.859 [2024-07-16 01:18:20.547384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.859 [2024-07-16 01:18:20.547410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.859 qpair failed and we were unable to recover it. 00:25:04.859 [2024-07-16 01:18:20.547532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.859 [2024-07-16 01:18:20.547559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.859 qpair failed and we were unable to recover it. 00:25:04.859 [2024-07-16 01:18:20.547679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.859 [2024-07-16 01:18:20.547705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.859 qpair failed and we were unable to recover it. 00:25:04.859 [2024-07-16 01:18:20.547823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.859 [2024-07-16 01:18:20.547853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.859 qpair failed and we were unable to recover it. 00:25:04.859 [2024-07-16 01:18:20.547993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.859 [2024-07-16 01:18:20.548021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.859 qpair failed and we were unable to recover it. 00:25:04.859 [2024-07-16 01:18:20.548149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.859 [2024-07-16 01:18:20.548175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.859 qpair failed and we were unable to recover it. 00:25:04.859 [2024-07-16 01:18:20.548289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.859 [2024-07-16 01:18:20.548317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.859 qpair failed and we were unable to recover it. 00:25:04.859 [2024-07-16 01:18:20.548468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.859 [2024-07-16 01:18:20.548494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.859 qpair failed and we were unable to recover it. 00:25:04.859 [2024-07-16 01:18:20.548616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.859 [2024-07-16 01:18:20.548644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.859 qpair failed and we were unable to recover it. 00:25:04.859 [2024-07-16 01:18:20.548760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.859 [2024-07-16 01:18:20.548787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.859 qpair failed and we were unable to recover it. 
00:25:04.859 [2024-07-16 01:18:20.548910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.859 [2024-07-16 01:18:20.548937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.859 qpair failed and we were unable to recover it. 00:25:04.859 [2024-07-16 01:18:20.549051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.859 [2024-07-16 01:18:20.549077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.859 qpair failed and we were unable to recover it. 00:25:04.859 [2024-07-16 01:18:20.549200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.859 [2024-07-16 01:18:20.549226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.859 qpair failed and we were unable to recover it. 00:25:04.859 [2024-07-16 01:18:20.549372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.859 [2024-07-16 01:18:20.549398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.859 qpair failed and we were unable to recover it. 00:25:04.860 [2024-07-16 01:18:20.549528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.860 [2024-07-16 01:18:20.549554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.860 qpair failed and we were unable to recover it. 00:25:04.860 [2024-07-16 01:18:20.549706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.860 [2024-07-16 01:18:20.549732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.860 qpair failed and we were unable to recover it. 00:25:04.860 [2024-07-16 01:18:20.549854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.860 [2024-07-16 01:18:20.549880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.860 qpair failed and we were unable to recover it. 00:25:04.860 [2024-07-16 01:18:20.550022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.860 [2024-07-16 01:18:20.550049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.860 qpair failed and we were unable to recover it. 00:25:04.860 [2024-07-16 01:18:20.550166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.860 [2024-07-16 01:18:20.550207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.860 qpair failed and we were unable to recover it. 00:25:04.860 [2024-07-16 01:18:20.550362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.860 [2024-07-16 01:18:20.550391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.860 qpair failed and we were unable to recover it. 
00:25:04.860 [2024-07-16 01:18:20.550514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.860 [2024-07-16 01:18:20.550541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.860 qpair failed and we were unable to recover it. 00:25:04.860 [2024-07-16 01:18:20.550668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.860 [2024-07-16 01:18:20.550696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.860 qpair failed and we were unable to recover it. 00:25:04.860 [2024-07-16 01:18:20.550821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.860 [2024-07-16 01:18:20.550848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.860 qpair failed and we were unable to recover it. 00:25:04.860 [2024-07-16 01:18:20.550959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.860 [2024-07-16 01:18:20.550987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.860 qpair failed and we were unable to recover it. 00:25:04.860 [2024-07-16 01:18:20.551110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.860 [2024-07-16 01:18:20.551136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.860 qpair failed and we were unable to recover it. 00:25:04.860 [2024-07-16 01:18:20.551289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.860 [2024-07-16 01:18:20.551315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.860 qpair failed and we were unable to recover it. 00:25:04.860 [2024-07-16 01:18:20.551461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.860 [2024-07-16 01:18:20.551488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.860 qpair failed and we were unable to recover it. 00:25:04.860 [2024-07-16 01:18:20.551641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.860 [2024-07-16 01:18:20.551669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.860 qpair failed and we were unable to recover it. 00:25:04.860 [2024-07-16 01:18:20.551791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.860 [2024-07-16 01:18:20.551817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.860 qpair failed and we were unable to recover it. 00:25:04.860 [2024-07-16 01:18:20.551911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.860 [2024-07-16 01:18:20.551937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.860 qpair failed and we were unable to recover it. 
00:25:04.860 [2024-07-16 01:18:20.552060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.860 [2024-07-16 01:18:20.552088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.860 qpair failed and we were unable to recover it. 00:25:04.860 [2024-07-16 01:18:20.552207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.860 [2024-07-16 01:18:20.552234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.860 qpair failed and we were unable to recover it. 00:25:04.860 [2024-07-16 01:18:20.552331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.860 [2024-07-16 01:18:20.552357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.860 qpair failed and we were unable to recover it. 00:25:04.860 [2024-07-16 01:18:20.552485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.860 [2024-07-16 01:18:20.552512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.860 qpair failed and we were unable to recover it. 00:25:04.860 [2024-07-16 01:18:20.552665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.860 [2024-07-16 01:18:20.552692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.860 qpair failed and we were unable to recover it. 00:25:04.860 [2024-07-16 01:18:20.552818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.860 [2024-07-16 01:18:20.552844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.860 qpair failed and we were unable to recover it. 00:25:04.860 [2024-07-16 01:18:20.553004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.860 [2024-07-16 01:18:20.553032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.860 qpair failed and we were unable to recover it. 00:25:04.860 [2024-07-16 01:18:20.553163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.860 [2024-07-16 01:18:20.553190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.860 qpair failed and we were unable to recover it. 00:25:04.860 [2024-07-16 01:18:20.553320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.860 [2024-07-16 01:18:20.553346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.860 qpair failed and we were unable to recover it. 00:25:04.860 [2024-07-16 01:18:20.553474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.860 [2024-07-16 01:18:20.553502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.860 qpair failed and we were unable to recover it. 
00:25:04.860 [2024-07-16 01:18:20.553634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.860 [2024-07-16 01:18:20.553660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.860 qpair failed and we were unable to recover it.
00:25:04.861 [2024-07-16 01:18:20.556598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.861 [2024-07-16 01:18:20.556639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.861 qpair failed and we were unable to recover it.
00:25:04.861 [2024-07-16 01:18:20.558700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.861 [2024-07-16 01:18:20.558740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.861 qpair failed and we were unable to recover it.
00:25:04.862 [2024-07-16 01:18:20.562683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.862 EAL: No free 2048 kB hugepages reported on node 1 00:25:04.862 [2024-07-16 01:18:20.562710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.862 qpair failed and we were unable to recover it.
00:25:04.863 [2024-07-16 01:18:20.568078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.863 [2024-07-16 01:18:20.568118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.863 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error / "qpair failed and we were unable to recover it.") repeats continuously from 01:18:20.553 through 01:18:20.584, every attempt targeting addr=10.0.0.2, port=4420 and cycling over the tqpair handles 0x7f7a60000b90, 0x7f7a58000b90, 0x7f7a68000b90, and 0x14b73f0 ...]
00:25:04.865 [2024-07-16 01:18:20.584364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.865 [2024-07-16 01:18:20.584392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.865 qpair failed and we were unable to recover it. 00:25:04.865 [2024-07-16 01:18:20.584488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.865 [2024-07-16 01:18:20.584515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.865 qpair failed and we were unable to recover it. 00:25:04.865 [2024-07-16 01:18:20.584645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.865 [2024-07-16 01:18:20.584673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.865 qpair failed and we were unable to recover it. 00:25:04.865 [2024-07-16 01:18:20.584787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.865 [2024-07-16 01:18:20.584828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.865 qpair failed and we were unable to recover it. 00:25:04.865 [2024-07-16 01:18:20.584950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.866 [2024-07-16 01:18:20.584983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.866 qpair failed and we were unable to recover it. 00:25:04.866 [2024-07-16 01:18:20.585086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.866 [2024-07-16 01:18:20.585113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.866 qpair failed and we were unable to recover it. 00:25:04.866 [2024-07-16 01:18:20.585218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.866 [2024-07-16 01:18:20.585255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.866 qpair failed and we were unable to recover it. 00:25:04.866 [2024-07-16 01:18:20.585374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.866 [2024-07-16 01:18:20.585400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.866 qpair failed and we were unable to recover it. 00:25:04.866 [2024-07-16 01:18:20.585508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.866 [2024-07-16 01:18:20.585534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.866 qpair failed and we were unable to recover it. 00:25:04.866 [2024-07-16 01:18:20.585639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.866 [2024-07-16 01:18:20.585669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.866 qpair failed and we were unable to recover it. 
00:25:04.866 [2024-07-16 01:18:20.585769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.866 [2024-07-16 01:18:20.585796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.866 qpair failed and we were unable to recover it. 00:25:04.866 [2024-07-16 01:18:20.585931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.866 [2024-07-16 01:18:20.585977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.866 qpair failed and we were unable to recover it. 00:25:04.866 [2024-07-16 01:18:20.586091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.866 [2024-07-16 01:18:20.586118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.866 qpair failed and we were unable to recover it. 00:25:04.866 [2024-07-16 01:18:20.586218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.866 [2024-07-16 01:18:20.586255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.866 qpair failed and we were unable to recover it. 00:25:04.866 [2024-07-16 01:18:20.586381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.866 [2024-07-16 01:18:20.586408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.866 qpair failed and we were unable to recover it. 00:25:04.866 [2024-07-16 01:18:20.586504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.866 [2024-07-16 01:18:20.586530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.866 qpair failed and we were unable to recover it. 00:25:04.866 [2024-07-16 01:18:20.586662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.866 [2024-07-16 01:18:20.586687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.866 qpair failed and we were unable to recover it. 00:25:04.866 [2024-07-16 01:18:20.586818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.866 [2024-07-16 01:18:20.586858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.866 qpair failed and we were unable to recover it. 00:25:04.866 [2024-07-16 01:18:20.586977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.866 [2024-07-16 01:18:20.587006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.866 qpair failed and we were unable to recover it. 00:25:04.866 [2024-07-16 01:18:20.587111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.866 [2024-07-16 01:18:20.587139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.866 qpair failed and we were unable to recover it. 
00:25:04.866 [2024-07-16 01:18:20.587235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.866 [2024-07-16 01:18:20.587263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.866 qpair failed and we were unable to recover it. 00:25:04.866 [2024-07-16 01:18:20.587392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.866 [2024-07-16 01:18:20.587419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.866 qpair failed and we were unable to recover it. 00:25:04.866 [2024-07-16 01:18:20.587527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.866 [2024-07-16 01:18:20.587555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.866 qpair failed and we were unable to recover it. 00:25:04.866 [2024-07-16 01:18:20.587652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.866 [2024-07-16 01:18:20.587678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.866 qpair failed and we were unable to recover it. 00:25:04.866 [2024-07-16 01:18:20.587804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.866 [2024-07-16 01:18:20.587834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.866 qpair failed and we were unable to recover it. 00:25:04.866 [2024-07-16 01:18:20.587948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.866 [2024-07-16 01:18:20.587998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.866 qpair failed and we were unable to recover it. 00:25:04.866 [2024-07-16 01:18:20.588109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.866 [2024-07-16 01:18:20.588137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.866 qpair failed and we were unable to recover it. 00:25:04.866 [2024-07-16 01:18:20.588264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.866 [2024-07-16 01:18:20.588291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.866 qpair failed and we were unable to recover it. 00:25:04.866 [2024-07-16 01:18:20.588392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.866 [2024-07-16 01:18:20.588419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.866 qpair failed and we were unable to recover it. 00:25:04.866 [2024-07-16 01:18:20.588542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.866 [2024-07-16 01:18:20.588569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.866 qpair failed and we were unable to recover it. 
00:25:04.866 [2024-07-16 01:18:20.588697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.866 [2024-07-16 01:18:20.588724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.866 qpair failed and we were unable to recover it. 00:25:04.866 [2024-07-16 01:18:20.588835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.866 [2024-07-16 01:18:20.588864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.866 qpair failed and we were unable to recover it. 00:25:04.866 [2024-07-16 01:18:20.588976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.866 [2024-07-16 01:18:20.589006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.866 qpair failed and we were unable to recover it. 00:25:04.866 [2024-07-16 01:18:20.589115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.866 [2024-07-16 01:18:20.589144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.866 qpair failed and we were unable to recover it. 00:25:04.866 [2024-07-16 01:18:20.589246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.866 [2024-07-16 01:18:20.589276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.866 qpair failed and we were unable to recover it. 00:25:04.866 [2024-07-16 01:18:20.589377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.866 [2024-07-16 01:18:20.589403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.866 qpair failed and we were unable to recover it. 00:25:04.866 [2024-07-16 01:18:20.589505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.866 [2024-07-16 01:18:20.589532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.866 qpair failed and we were unable to recover it. 00:25:04.866 [2024-07-16 01:18:20.589659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.866 [2024-07-16 01:18:20.589686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.866 qpair failed and we were unable to recover it. 00:25:04.866 [2024-07-16 01:18:20.589816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.866 [2024-07-16 01:18:20.589846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.866 qpair failed and we were unable to recover it. 00:25:04.866 [2024-07-16 01:18:20.589949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.866 [2024-07-16 01:18:20.589982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.866 qpair failed and we were unable to recover it. 
00:25:04.866 [2024-07-16 01:18:20.590086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.866 [2024-07-16 01:18:20.590113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.866 qpair failed and we were unable to recover it. 00:25:04.866 [2024-07-16 01:18:20.590236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.866 [2024-07-16 01:18:20.590263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.866 qpair failed and we were unable to recover it. 00:25:04.866 [2024-07-16 01:18:20.590363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.866 [2024-07-16 01:18:20.590390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.866 qpair failed and we were unable to recover it. 00:25:04.866 [2024-07-16 01:18:20.590510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.866 [2024-07-16 01:18:20.590537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.866 qpair failed and we were unable to recover it. 00:25:04.866 [2024-07-16 01:18:20.590636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.866 [2024-07-16 01:18:20.590665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.866 qpair failed and we were unable to recover it. 00:25:04.866 [2024-07-16 01:18:20.590798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.867 [2024-07-16 01:18:20.590825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.867 qpair failed and we were unable to recover it. 00:25:04.867 [2024-07-16 01:18:20.590924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.867 [2024-07-16 01:18:20.590960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.867 qpair failed and we were unable to recover it. 00:25:04.867 [2024-07-16 01:18:20.591068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.867 [2024-07-16 01:18:20.591096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.867 qpair failed and we were unable to recover it. 00:25:04.867 [2024-07-16 01:18:20.591192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.867 [2024-07-16 01:18:20.591219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.867 qpair failed and we were unable to recover it. 00:25:04.867 [2024-07-16 01:18:20.591315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.867 [2024-07-16 01:18:20.591342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.867 qpair failed and we were unable to recover it. 
00:25:04.867 [2024-07-16 01:18:20.591489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.867 [2024-07-16 01:18:20.591516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.867 qpair failed and we were unable to recover it. 00:25:04.867 [2024-07-16 01:18:20.591619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.867 [2024-07-16 01:18:20.591651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.867 qpair failed and we were unable to recover it. 00:25:04.867 [2024-07-16 01:18:20.591794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.867 [2024-07-16 01:18:20.591822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.867 qpair failed and we were unable to recover it. 00:25:04.867 [2024-07-16 01:18:20.591918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.867 [2024-07-16 01:18:20.591945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.867 qpair failed and we were unable to recover it. 00:25:04.867 [2024-07-16 01:18:20.592084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.867 [2024-07-16 01:18:20.592112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.867 qpair failed and we were unable to recover it. 00:25:04.867 [2024-07-16 01:18:20.592254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.867 [2024-07-16 01:18:20.592282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.867 qpair failed and we were unable to recover it. 00:25:04.867 [2024-07-16 01:18:20.592420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.867 [2024-07-16 01:18:20.592446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.867 qpair failed and we were unable to recover it. 00:25:04.867 [2024-07-16 01:18:20.592597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.867 [2024-07-16 01:18:20.592624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.867 qpair failed and we were unable to recover it. 00:25:04.867 [2024-07-16 01:18:20.592732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.867 [2024-07-16 01:18:20.592759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.867 qpair failed and we were unable to recover it. 00:25:04.867 [2024-07-16 01:18:20.592880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.867 [2024-07-16 01:18:20.592907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.867 qpair failed and we were unable to recover it. 
00:25:04.867 [2024-07-16 01:18:20.593002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.867 [2024-07-16 01:18:20.593030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.867 qpair failed and we were unable to recover it. 00:25:04.867 [2024-07-16 01:18:20.593142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.867 [2024-07-16 01:18:20.593169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.867 qpair failed and we were unable to recover it. 00:25:04.867 [2024-07-16 01:18:20.593298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.867 [2024-07-16 01:18:20.593325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.867 qpair failed and we were unable to recover it. 00:25:04.867 [2024-07-16 01:18:20.593451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.867 [2024-07-16 01:18:20.593479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.867 qpair failed and we were unable to recover it. 00:25:04.867 [2024-07-16 01:18:20.593576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.867 [2024-07-16 01:18:20.593604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.867 qpair failed and we were unable to recover it. 00:25:04.867 [2024-07-16 01:18:20.593731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.867 [2024-07-16 01:18:20.593759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.867 qpair failed and we were unable to recover it. 00:25:04.867 [2024-07-16 01:18:20.593893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.867 [2024-07-16 01:18:20.593934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.867 qpair failed and we were unable to recover it. 00:25:04.867 [2024-07-16 01:18:20.594049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.867 [2024-07-16 01:18:20.594077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.867 qpair failed and we were unable to recover it. 00:25:04.867 [2024-07-16 01:18:20.594179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.867 [2024-07-16 01:18:20.594204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.867 qpair failed and we were unable to recover it. 00:25:04.867 [2024-07-16 01:18:20.594334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.867 [2024-07-16 01:18:20.594359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.867 qpair failed and we were unable to recover it. 
00:25:04.867 [2024-07-16 01:18:20.594475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.867 [2024-07-16 01:18:20.594501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.867 qpair failed and we were unable to recover it. 00:25:04.867 [2024-07-16 01:18:20.594596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.867 [2024-07-16 01:18:20.594623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.867 qpair failed and we were unable to recover it. 00:25:04.867 [2024-07-16 01:18:20.594717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.867 [2024-07-16 01:18:20.594743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.867 qpair failed and we were unable to recover it. 00:25:04.867 [2024-07-16 01:18:20.594876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.867 [2024-07-16 01:18:20.594915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.867 qpair failed and we were unable to recover it. 00:25:04.867 [2024-07-16 01:18:20.595042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.867 [2024-07-16 01:18:20.595088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.867 qpair failed and we were unable to recover it. 00:25:04.867 [2024-07-16 01:18:20.595200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.867 [2024-07-16 01:18:20.595230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.867 qpair failed and we were unable to recover it. 00:25:04.867 [2024-07-16 01:18:20.595323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.867 [2024-07-16 01:18:20.595350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.867 qpair failed and we were unable to recover it. 00:25:04.867 [2024-07-16 01:18:20.595454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.867 [2024-07-16 01:18:20.595481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.867 qpair failed and we were unable to recover it. 00:25:04.867 [2024-07-16 01:18:20.595589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.867 [2024-07-16 01:18:20.595617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.867 qpair failed and we were unable to recover it. 00:25:04.867 [2024-07-16 01:18:20.595742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.867 [2024-07-16 01:18:20.595769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.867 qpair failed and we were unable to recover it. 
00:25:04.867 [2024-07-16 01:18:20.595895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.867 [2024-07-16 01:18:20.595920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.867 qpair failed and we were unable to recover it.
00:25:04.867 [2024-07-16 01:18:20.596023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.867 [2024-07-16 01:18:20.596049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.867 qpair failed and we were unable to recover it.
00:25:04.867 [2024-07-16 01:18:20.596155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.867 [2024-07-16 01:18:20.596180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.867 qpair failed and we were unable to recover it.
00:25:04.867 [2024-07-16 01:18:20.596330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.867 [2024-07-16 01:18:20.596356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.867 qpair failed and we were unable to recover it.
00:25:04.867 [2024-07-16 01:18:20.596478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.867 [2024-07-16 01:18:20.596504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.867 qpair failed and we were unable to recover it.
00:25:04.867 [2024-07-16 01:18:20.596653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.867 [2024-07-16 01:18:20.596681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:04.867 qpair failed and we were unable to recover it.
00:25:04.867 [2024-07-16 01:18:20.596824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.868 [2024-07-16 01:18:20.596864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:04.868 qpair failed and we were unable to recover it.
00:25:04.868 [2024-07-16 01:18:20.596899] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4
00:25:04.868 [2024-07-16 01:18:20.596978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.868 [2024-07-16 01:18:20.597007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:04.868 qpair failed and we were unable to recover it.
00:25:04.868 [2024-07-16 01:18:20.597152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.868 [2024-07-16 01:18:20.597180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:04.868 qpair failed and we were unable to recover it.
00:25:04.868 [2024-07-16 01:18:20.597278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.868 [2024-07-16 01:18:20.597305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:04.868 qpair failed and we were unable to recover it.
00:25:04.870 [2024-07-16 01:18:20.609427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.870 [2024-07-16 01:18:20.609454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.870 qpair failed and we were unable to recover it. 00:25:04.870 [2024-07-16 01:18:20.609580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.870 [2024-07-16 01:18:20.609606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.870 qpair failed and we were unable to recover it. 00:25:04.870 [2024-07-16 01:18:20.609713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.870 [2024-07-16 01:18:20.609740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.870 qpair failed and we were unable to recover it. 00:25:04.870 [2024-07-16 01:18:20.609837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.870 [2024-07-16 01:18:20.609866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.870 qpair failed and we were unable to recover it. 00:25:04.870 [2024-07-16 01:18:20.609985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.870 [2024-07-16 01:18:20.610013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.870 qpair failed and we were unable to recover it. 00:25:04.870 [2024-07-16 01:18:20.610139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.870 [2024-07-16 01:18:20.610167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.870 qpair failed and we were unable to recover it. 00:25:04.870 [2024-07-16 01:18:20.610308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.870 [2024-07-16 01:18:20.610335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.870 qpair failed and we were unable to recover it. 00:25:04.870 [2024-07-16 01:18:20.610432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.870 [2024-07-16 01:18:20.610459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.870 qpair failed and we were unable to recover it. 00:25:04.870 [2024-07-16 01:18:20.610597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.870 [2024-07-16 01:18:20.610626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.870 qpair failed and we were unable to recover it. 00:25:04.870 [2024-07-16 01:18:20.610758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.870 [2024-07-16 01:18:20.610785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.870 qpair failed and we were unable to recover it. 
00:25:04.870 [2024-07-16 01:18:20.610934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.870 [2024-07-16 01:18:20.610976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.870 qpair failed and we were unable to recover it. 00:25:04.870 [2024-07-16 01:18:20.611080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.870 [2024-07-16 01:18:20.611114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.870 qpair failed and we were unable to recover it. 00:25:04.870 [2024-07-16 01:18:20.611239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.870 [2024-07-16 01:18:20.611266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.870 qpair failed and we were unable to recover it. 00:25:04.870 [2024-07-16 01:18:20.611371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.870 [2024-07-16 01:18:20.611401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.870 qpair failed and we were unable to recover it. 00:25:04.870 [2024-07-16 01:18:20.611541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.870 [2024-07-16 01:18:20.611582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.870 qpair failed and we were unable to recover it. 00:25:04.870 [2024-07-16 01:18:20.611725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.870 [2024-07-16 01:18:20.611754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.870 qpair failed and we were unable to recover it. 00:25:04.870 [2024-07-16 01:18:20.611858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.870 [2024-07-16 01:18:20.611885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.870 qpair failed and we were unable to recover it. 00:25:04.870 [2024-07-16 01:18:20.611985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.870 [2024-07-16 01:18:20.612013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.870 qpair failed and we were unable to recover it. 00:25:04.870 [2024-07-16 01:18:20.612116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.870 [2024-07-16 01:18:20.612144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.870 qpair failed and we were unable to recover it. 00:25:04.870 [2024-07-16 01:18:20.612256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.870 [2024-07-16 01:18:20.612283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.870 qpair failed and we were unable to recover it. 
00:25:04.870 [2024-07-16 01:18:20.612382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.870 [2024-07-16 01:18:20.612411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.870 qpair failed and we were unable to recover it. 00:25:04.870 [2024-07-16 01:18:20.612541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.870 [2024-07-16 01:18:20.612569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.870 qpair failed and we were unable to recover it. 00:25:04.870 [2024-07-16 01:18:20.612670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.870 [2024-07-16 01:18:20.612697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.870 qpair failed and we were unable to recover it. 00:25:04.870 [2024-07-16 01:18:20.612796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.870 [2024-07-16 01:18:20.612825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.870 qpair failed and we were unable to recover it. 00:25:04.870 [2024-07-16 01:18:20.612961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.870 [2024-07-16 01:18:20.612989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.870 qpair failed and we were unable to recover it. 00:25:04.870 [2024-07-16 01:18:20.613119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.870 [2024-07-16 01:18:20.613147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.870 qpair failed and we were unable to recover it. 00:25:04.870 [2024-07-16 01:18:20.613249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.870 [2024-07-16 01:18:20.613278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.870 qpair failed and we were unable to recover it. 00:25:04.870 [2024-07-16 01:18:20.613411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.870 [2024-07-16 01:18:20.613439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.870 qpair failed and we were unable to recover it. 00:25:04.870 [2024-07-16 01:18:20.613541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.870 [2024-07-16 01:18:20.613568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.870 qpair failed and we were unable to recover it. 00:25:04.870 [2024-07-16 01:18:20.613723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.870 [2024-07-16 01:18:20.613751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.870 qpair failed and we were unable to recover it. 
00:25:04.870 [2024-07-16 01:18:20.613881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.870 [2024-07-16 01:18:20.613909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.870 qpair failed and we were unable to recover it. 00:25:04.870 [2024-07-16 01:18:20.614012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.870 [2024-07-16 01:18:20.614039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.870 qpair failed and we were unable to recover it. 00:25:04.870 [2024-07-16 01:18:20.614174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.870 [2024-07-16 01:18:20.614204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.870 qpair failed and we were unable to recover it. 00:25:04.870 [2024-07-16 01:18:20.614361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.871 [2024-07-16 01:18:20.614389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.871 qpair failed and we were unable to recover it. 00:25:04.871 [2024-07-16 01:18:20.614516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.871 [2024-07-16 01:18:20.614543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.871 qpair failed and we were unable to recover it. 00:25:04.871 [2024-07-16 01:18:20.614678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.871 [2024-07-16 01:18:20.614705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.871 qpair failed and we were unable to recover it. 00:25:04.871 [2024-07-16 01:18:20.614825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.871 [2024-07-16 01:18:20.614867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.871 qpair failed and we were unable to recover it. 00:25:04.871 [2024-07-16 01:18:20.614977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.871 [2024-07-16 01:18:20.615005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.871 qpair failed and we were unable to recover it. 00:25:04.871 [2024-07-16 01:18:20.615116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.871 [2024-07-16 01:18:20.615146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.871 qpair failed and we were unable to recover it. 00:25:04.871 [2024-07-16 01:18:20.615255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.871 [2024-07-16 01:18:20.615282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.871 qpair failed and we were unable to recover it. 
00:25:04.871 [2024-07-16 01:18:20.615431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.871 [2024-07-16 01:18:20.615459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.871 qpair failed and we were unable to recover it. 00:25:04.871 [2024-07-16 01:18:20.615592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.871 [2024-07-16 01:18:20.615621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.871 qpair failed and we were unable to recover it. 00:25:04.871 [2024-07-16 01:18:20.615776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.871 [2024-07-16 01:18:20.615805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.871 qpair failed and we were unable to recover it. 00:25:04.871 [2024-07-16 01:18:20.615919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.871 [2024-07-16 01:18:20.615965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.871 qpair failed and we were unable to recover it. 00:25:04.871 [2024-07-16 01:18:20.616103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.871 [2024-07-16 01:18:20.616132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.871 qpair failed and we were unable to recover it. 00:25:04.871 [2024-07-16 01:18:20.616266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.871 [2024-07-16 01:18:20.616294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.871 qpair failed and we were unable to recover it. 00:25:04.871 [2024-07-16 01:18:20.616396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.871 [2024-07-16 01:18:20.616423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.871 qpair failed and we were unable to recover it. 00:25:04.871 [2024-07-16 01:18:20.616581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.871 [2024-07-16 01:18:20.616608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.871 qpair failed and we were unable to recover it. 00:25:04.871 [2024-07-16 01:18:20.616701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.871 [2024-07-16 01:18:20.616730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.871 qpair failed and we were unable to recover it. 00:25:04.871 [2024-07-16 01:18:20.616838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.871 [2024-07-16 01:18:20.616865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.871 qpair failed and we were unable to recover it. 
00:25:04.871 [2024-07-16 01:18:20.616984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.871 [2024-07-16 01:18:20.617014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.871 qpair failed and we were unable to recover it. 00:25:04.871 [2024-07-16 01:18:20.617170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.871 [2024-07-16 01:18:20.617202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.871 qpair failed and we were unable to recover it. 00:25:04.871 [2024-07-16 01:18:20.617308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.871 [2024-07-16 01:18:20.617336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.871 qpair failed and we were unable to recover it. 00:25:04.871 [2024-07-16 01:18:20.617434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.871 [2024-07-16 01:18:20.617460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.871 qpair failed and we were unable to recover it. 00:25:04.871 [2024-07-16 01:18:20.617563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.871 [2024-07-16 01:18:20.617589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.871 qpair failed and we were unable to recover it. 00:25:04.871 [2024-07-16 01:18:20.617714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.871 [2024-07-16 01:18:20.617741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.871 qpair failed and we were unable to recover it. 00:25:04.871 [2024-07-16 01:18:20.617861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.871 [2024-07-16 01:18:20.617887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.871 qpair failed and we were unable to recover it. 00:25:04.871 [2024-07-16 01:18:20.618010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.871 [2024-07-16 01:18:20.618038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.871 qpair failed and we were unable to recover it. 00:25:04.871 [2024-07-16 01:18:20.618168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.871 [2024-07-16 01:18:20.618195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.871 qpair failed and we were unable to recover it. 00:25:04.871 [2024-07-16 01:18:20.618321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.871 [2024-07-16 01:18:20.618348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.871 qpair failed and we were unable to recover it. 
00:25:04.871 [2024-07-16 01:18:20.618475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.871 [2024-07-16 01:18:20.618501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.871 qpair failed and we were unable to recover it. 00:25:04.871 [2024-07-16 01:18:20.618599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.871 [2024-07-16 01:18:20.618624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.871 qpair failed and we were unable to recover it. 00:25:04.871 [2024-07-16 01:18:20.618734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.871 [2024-07-16 01:18:20.618764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.871 qpair failed and we were unable to recover it. 00:25:04.871 [2024-07-16 01:18:20.618916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.871 [2024-07-16 01:18:20.618942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.871 qpair failed and we were unable to recover it. 00:25:04.871 [2024-07-16 01:18:20.619052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.871 [2024-07-16 01:18:20.619079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.871 qpair failed and we were unable to recover it. 00:25:04.871 [2024-07-16 01:18:20.619190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.871 [2024-07-16 01:18:20.619217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.871 qpair failed and we were unable to recover it. 00:25:04.871 [2024-07-16 01:18:20.619322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.871 [2024-07-16 01:18:20.619350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.871 qpair failed and we were unable to recover it. 00:25:04.871 [2024-07-16 01:18:20.619491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.871 [2024-07-16 01:18:20.619518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.871 qpair failed and we were unable to recover it. 00:25:04.871 [2024-07-16 01:18:20.619644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.871 [2024-07-16 01:18:20.619671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.871 qpair failed and we were unable to recover it. 00:25:04.871 [2024-07-16 01:18:20.619828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.871 [2024-07-16 01:18:20.619855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.871 qpair failed and we were unable to recover it. 
00:25:04.871 [2024-07-16 01:18:20.619997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.871 [2024-07-16 01:18:20.620039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.871 qpair failed and we were unable to recover it. 00:25:04.871 [2024-07-16 01:18:20.620151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.871 [2024-07-16 01:18:20.620181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.871 qpair failed and we were unable to recover it. 00:25:04.871 [2024-07-16 01:18:20.620333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.871 [2024-07-16 01:18:20.620359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.871 qpair failed and we were unable to recover it. 00:25:04.871 [2024-07-16 01:18:20.620481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.871 [2024-07-16 01:18:20.620508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.871 qpair failed and we were unable to recover it. 00:25:04.871 [2024-07-16 01:18:20.620632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.872 [2024-07-16 01:18:20.620658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.872 qpair failed and we were unable to recover it. 00:25:04.872 [2024-07-16 01:18:20.620758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.872 [2024-07-16 01:18:20.620784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.872 qpair failed and we were unable to recover it. 00:25:04.872 [2024-07-16 01:18:20.620880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.872 [2024-07-16 01:18:20.620907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.872 qpair failed and we were unable to recover it. 00:25:04.872 [2024-07-16 01:18:20.621017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.872 [2024-07-16 01:18:20.621048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.872 qpair failed and we were unable to recover it. 00:25:04.872 [2024-07-16 01:18:20.621150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.872 [2024-07-16 01:18:20.621184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.872 qpair failed and we were unable to recover it. 00:25:04.872 [2024-07-16 01:18:20.621319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.872 [2024-07-16 01:18:20.621347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.872 qpair failed and we were unable to recover it. 
00:25:04.872 [2024-07-16 01:18:20.621449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.872 [2024-07-16 01:18:20.621477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.872 qpair failed and we were unable to recover it. 00:25:04.872 [2024-07-16 01:18:20.621584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.872 [2024-07-16 01:18:20.621624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.872 qpair failed and we were unable to recover it. 00:25:04.872 [2024-07-16 01:18:20.621724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.872 [2024-07-16 01:18:20.621752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.872 qpair failed and we were unable to recover it. 00:25:04.872 [2024-07-16 01:18:20.621878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.872 [2024-07-16 01:18:20.621906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.872 qpair failed and we were unable to recover it. 00:25:04.872 [2024-07-16 01:18:20.622007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.872 [2024-07-16 01:18:20.622036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.872 qpair failed and we were unable to recover it. 00:25:04.872 [2024-07-16 01:18:20.622170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.872 [2024-07-16 01:18:20.622196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.872 qpair failed and we were unable to recover it. 00:25:04.872 [2024-07-16 01:18:20.622317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.872 [2024-07-16 01:18:20.622344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.872 qpair failed and we were unable to recover it. 00:25:04.872 [2024-07-16 01:18:20.622467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.872 [2024-07-16 01:18:20.622494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.872 qpair failed and we were unable to recover it. 00:25:04.872 [2024-07-16 01:18:20.622599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.872 [2024-07-16 01:18:20.622625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.872 qpair failed and we were unable to recover it. 00:25:04.872 [2024-07-16 01:18:20.622755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.872 [2024-07-16 01:18:20.622783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.872 qpair failed and we were unable to recover it. 
00:25:04.872 [2024-07-16 01:18:20.622907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.872 [2024-07-16 01:18:20.622934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.872 qpair failed and we were unable to recover it. 00:25:04.872 [2024-07-16 01:18:20.623079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.872 [2024-07-16 01:18:20.623106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.872 qpair failed and we were unable to recover it. 00:25:04.872 [2024-07-16 01:18:20.623225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.872 [2024-07-16 01:18:20.623265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.872 qpair failed and we were unable to recover it. 00:25:04.872 [2024-07-16 01:18:20.623425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.872 [2024-07-16 01:18:20.623453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.872 qpair failed and we were unable to recover it. 00:25:04.872 [2024-07-16 01:18:20.623550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.872 [2024-07-16 01:18:20.623578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.872 qpair failed and we were unable to recover it. 00:25:04.872 [2024-07-16 01:18:20.623679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.872 [2024-07-16 01:18:20.623706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.872 qpair failed and we were unable to recover it. 00:25:04.872 [2024-07-16 01:18:20.623804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.872 [2024-07-16 01:18:20.623831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.872 qpair failed and we were unable to recover it. 00:25:04.872 [2024-07-16 01:18:20.623963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.872 [2024-07-16 01:18:20.623991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.872 qpair failed and we were unable to recover it. 00:25:04.872 [2024-07-16 01:18:20.624122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.872 [2024-07-16 01:18:20.624148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.872 qpair failed and we were unable to recover it. 00:25:04.872 [2024-07-16 01:18:20.624256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.872 [2024-07-16 01:18:20.624286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.872 qpair failed and we were unable to recover it. 
00:25:04.872 [2024-07-16 01:18:20.624418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.872 [2024-07-16 01:18:20.624446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.872 qpair failed and we were unable to recover it. 00:25:04.872 [2024-07-16 01:18:20.624599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.872 [2024-07-16 01:18:20.624627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.872 qpair failed and we were unable to recover it. 00:25:04.872 [2024-07-16 01:18:20.624721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.872 [2024-07-16 01:18:20.624748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.872 qpair failed and we were unable to recover it. 00:25:04.872 [2024-07-16 01:18:20.624872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.872 [2024-07-16 01:18:20.624899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.872 qpair failed and we were unable to recover it. 00:25:04.872 [2024-07-16 01:18:20.625008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.872 [2024-07-16 01:18:20.625037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.872 qpair failed and we were unable to recover it. 00:25:04.872 [2024-07-16 01:18:20.625146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.872 [2024-07-16 01:18:20.625178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.872 qpair failed and we were unable to recover it. 00:25:04.872 [2024-07-16 01:18:20.625303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.872 [2024-07-16 01:18:20.625341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.872 qpair failed and we were unable to recover it. 00:25:04.872 [2024-07-16 01:18:20.625466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.872 [2024-07-16 01:18:20.625494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.872 qpair failed and we were unable to recover it. 00:25:04.872 [2024-07-16 01:18:20.625601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.872 [2024-07-16 01:18:20.625630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.872 qpair failed and we were unable to recover it. 00:25:04.872 [2024-07-16 01:18:20.625774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.872 [2024-07-16 01:18:20.625814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.872 qpair failed and we were unable to recover it. 
00:25:04.872 [2024-07-16 01:18:20.625953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.873 [2024-07-16 01:18:20.625994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.873 qpair failed and we were unable to recover it. 00:25:04.873 [2024-07-16 01:18:20.626100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.873 [2024-07-16 01:18:20.626129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.873 qpair failed and we were unable to recover it. 00:25:04.873 [2024-07-16 01:18:20.626234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.873 [2024-07-16 01:18:20.626262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.873 qpair failed and we were unable to recover it. 00:25:04.873 [2024-07-16 01:18:20.626370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.873 [2024-07-16 01:18:20.626397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.873 qpair failed and we were unable to recover it. 00:25:04.873 [2024-07-16 01:18:20.626500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.873 [2024-07-16 01:18:20.626526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.873 qpair failed and we were unable to recover it. 00:25:04.873 [2024-07-16 01:18:20.626654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.873 [2024-07-16 01:18:20.626681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.873 qpair failed and we were unable to recover it. 00:25:04.873 [2024-07-16 01:18:20.626805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.873 [2024-07-16 01:18:20.626831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.873 qpair failed and we were unable to recover it. 00:25:04.873 [2024-07-16 01:18:20.626961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.873 [2024-07-16 01:18:20.626988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.873 qpair failed and we were unable to recover it. 00:25:04.873 [2024-07-16 01:18:20.627084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.873 [2024-07-16 01:18:20.627111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.873 qpair failed and we were unable to recover it. 00:25:04.873 [2024-07-16 01:18:20.627213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.873 [2024-07-16 01:18:20.627241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.873 qpair failed and we were unable to recover it. 
00:25:04.873 [2024-07-16 01:18:20.627367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.873 [2024-07-16 01:18:20.627394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.873 qpair failed and we were unable to recover it. 00:25:04.873 [2024-07-16 01:18:20.627547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.873 [2024-07-16 01:18:20.627574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.873 qpair failed and we were unable to recover it. 00:25:04.873 [2024-07-16 01:18:20.627682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.873 [2024-07-16 01:18:20.627710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.873 qpair failed and we were unable to recover it. 00:25:04.873 [2024-07-16 01:18:20.627841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.873 [2024-07-16 01:18:20.627870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.873 qpair failed and we were unable to recover it. 00:25:04.873 [2024-07-16 01:18:20.627995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.873 [2024-07-16 01:18:20.628023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.873 qpair failed and we were unable to recover it. 00:25:04.873 [2024-07-16 01:18:20.628126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.873 [2024-07-16 01:18:20.628154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.873 qpair failed and we were unable to recover it. 00:25:04.873 [2024-07-16 01:18:20.628287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.873 [2024-07-16 01:18:20.628314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.873 qpair failed and we were unable to recover it. 00:25:04.873 [2024-07-16 01:18:20.628437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.873 [2024-07-16 01:18:20.628464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.873 qpair failed and we were unable to recover it. 00:25:04.873 [2024-07-16 01:18:20.628590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.873 [2024-07-16 01:18:20.628618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.873 qpair failed and we were unable to recover it. 00:25:04.873 [2024-07-16 01:18:20.628718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.873 [2024-07-16 01:18:20.628745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.873 qpair failed and we were unable to recover it. 
00:25:04.873 [2024-07-16 01:18:20.628898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.873 [2024-07-16 01:18:20.628925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.873 qpair failed and we were unable to recover it. 
[... the three log lines above repeat continuously from 01:18:20.628898 through 01:18:20.660075, cycling through tqpair=0x7f7a60000b90, 0x7f7a68000b90, 0x7f7a58000b90, and 0x14b73f0; every connect() attempt to 10.0.0.2 port 4420 fails with errno = 111 and each affected qpair is reported as failed and unrecoverable ...]
00:25:04.878 [2024-07-16 01:18:20.660182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.878 [2024-07-16 01:18:20.660209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.878 qpair failed and we were unable to recover it. 00:25:04.878 [2024-07-16 01:18:20.660318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.878 [2024-07-16 01:18:20.660344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.878 qpair failed and we were unable to recover it. 00:25:04.878 [2024-07-16 01:18:20.660471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.878 [2024-07-16 01:18:20.660498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.878 qpair failed and we were unable to recover it. 00:25:04.878 [2024-07-16 01:18:20.660627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.878 [2024-07-16 01:18:20.660656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.878 qpair failed and we were unable to recover it. 00:25:04.878 [2024-07-16 01:18:20.660764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.878 [2024-07-16 01:18:20.660790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.878 qpair failed and we were unable to recover it. 00:25:04.878 [2024-07-16 01:18:20.660899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.878 [2024-07-16 01:18:20.660925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.878 qpair failed and we were unable to recover it. 00:25:04.878 [2024-07-16 01:18:20.661032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.878 [2024-07-16 01:18:20.661059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.878 qpair failed and we were unable to recover it. 00:25:04.878 [2024-07-16 01:18:20.661187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.879 [2024-07-16 01:18:20.661213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.879 qpair failed and we were unable to recover it. 00:25:04.879 [2024-07-16 01:18:20.661335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.879 [2024-07-16 01:18:20.661362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.879 qpair failed and we were unable to recover it. 00:25:04.879 [2024-07-16 01:18:20.661521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.879 [2024-07-16 01:18:20.661547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.879 qpair failed and we were unable to recover it. 
00:25:04.879 [2024-07-16 01:18:20.661644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.879 [2024-07-16 01:18:20.661671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.879 qpair failed and we were unable to recover it. 00:25:04.879 [2024-07-16 01:18:20.661790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.879 [2024-07-16 01:18:20.661831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.879 qpair failed and we were unable to recover it. 00:25:04.879 [2024-07-16 01:18:20.661971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.879 [2024-07-16 01:18:20.662001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.879 qpair failed and we were unable to recover it. 00:25:04.879 [2024-07-16 01:18:20.662129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.879 [2024-07-16 01:18:20.662158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.879 qpair failed and we were unable to recover it. 00:25:04.879 [2024-07-16 01:18:20.662288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.879 [2024-07-16 01:18:20.662316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.879 qpair failed and we were unable to recover it. 00:25:04.879 [2024-07-16 01:18:20.662439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.879 [2024-07-16 01:18:20.662467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.879 qpair failed and we were unable to recover it. 00:25:04.879 [2024-07-16 01:18:20.662576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.879 [2024-07-16 01:18:20.662604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.879 qpair failed and we were unable to recover it. 00:25:04.879 [2024-07-16 01:18:20.662699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.879 [2024-07-16 01:18:20.662726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.879 qpair failed and we were unable to recover it. 00:25:04.879 [2024-07-16 01:18:20.662869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.879 [2024-07-16 01:18:20.662910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.879 qpair failed and we were unable to recover it. 00:25:04.879 [2024-07-16 01:18:20.663053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.879 [2024-07-16 01:18:20.663088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.879 qpair failed and we were unable to recover it. 
00:25:04.879 [2024-07-16 01:18:20.663190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.879 [2024-07-16 01:18:20.663217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.879 qpair failed and we were unable to recover it. 00:25:04.879 [2024-07-16 01:18:20.663311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.879 [2024-07-16 01:18:20.663337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.879 qpair failed and we were unable to recover it. 00:25:04.879 [2024-07-16 01:18:20.663440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.879 [2024-07-16 01:18:20.663467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.879 qpair failed and we were unable to recover it. 00:25:04.879 [2024-07-16 01:18:20.663601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.879 [2024-07-16 01:18:20.663630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.879 qpair failed and we were unable to recover it. 00:25:04.879 [2024-07-16 01:18:20.663777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.879 [2024-07-16 01:18:20.663804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.879 qpair failed and we were unable to recover it. 00:25:04.879 [2024-07-16 01:18:20.663896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.879 [2024-07-16 01:18:20.663922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.879 qpair failed and we were unable to recover it. 00:25:04.879 [2024-07-16 01:18:20.664061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.879 [2024-07-16 01:18:20.664089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.879 qpair failed and we were unable to recover it. 00:25:04.879 [2024-07-16 01:18:20.664186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.879 [2024-07-16 01:18:20.664213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.879 qpair failed and we were unable to recover it. 00:25:04.879 [2024-07-16 01:18:20.664335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.879 [2024-07-16 01:18:20.664361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.879 qpair failed and we were unable to recover it. 00:25:04.879 [2024-07-16 01:18:20.664489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.879 [2024-07-16 01:18:20.664517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.879 qpair failed and we were unable to recover it. 
00:25:04.879 [2024-07-16 01:18:20.664633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.879 [2024-07-16 01:18:20.664663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.879 qpair failed and we were unable to recover it. 00:25:04.879 [2024-07-16 01:18:20.664814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.879 [2024-07-16 01:18:20.664855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.879 qpair failed and we were unable to recover it. 00:25:04.879 [2024-07-16 01:18:20.664964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.879 [2024-07-16 01:18:20.664992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.879 qpair failed and we were unable to recover it. 00:25:04.879 [2024-07-16 01:18:20.665208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.879 [2024-07-16 01:18:20.665235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.879 qpair failed and we were unable to recover it. 00:25:04.879 [2024-07-16 01:18:20.665388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.879 [2024-07-16 01:18:20.665415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.879 qpair failed and we were unable to recover it. 00:25:04.879 [2024-07-16 01:18:20.665543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.879 [2024-07-16 01:18:20.665570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.879 qpair failed and we were unable to recover it. 00:25:04.879 [2024-07-16 01:18:20.665723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.879 [2024-07-16 01:18:20.665750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.879 qpair failed and we were unable to recover it. 00:25:04.879 [2024-07-16 01:18:20.665852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.879 [2024-07-16 01:18:20.665878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.879 qpair failed and we were unable to recover it. 00:25:04.879 [2024-07-16 01:18:20.666021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.879 [2024-07-16 01:18:20.666062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.879 qpair failed and we were unable to recover it. 00:25:04.879 [2024-07-16 01:18:20.666177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.879 [2024-07-16 01:18:20.666217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.879 qpair failed and we were unable to recover it. 
00:25:04.879 [2024-07-16 01:18:20.666350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.879 [2024-07-16 01:18:20.666378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.879 qpair failed and we were unable to recover it. 00:25:04.879 [2024-07-16 01:18:20.666480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.879 [2024-07-16 01:18:20.666506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.879 qpair failed and we were unable to recover it. 00:25:04.879 [2024-07-16 01:18:20.666634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.879 [2024-07-16 01:18:20.666660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.879 qpair failed and we were unable to recover it. 00:25:04.879 [2024-07-16 01:18:20.666755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.879 [2024-07-16 01:18:20.666781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.879 qpair failed and we were unable to recover it. 00:25:04.879 [2024-07-16 01:18:20.666931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.879 [2024-07-16 01:18:20.666965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.879 qpair failed and we were unable to recover it. 00:25:04.879 [2024-07-16 01:18:20.667067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.879 [2024-07-16 01:18:20.667093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.879 qpair failed and we were unable to recover it. 00:25:04.879 [2024-07-16 01:18:20.667196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.879 [2024-07-16 01:18:20.667228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.879 qpair failed and we were unable to recover it. 00:25:04.879 [2024-07-16 01:18:20.667357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.879 [2024-07-16 01:18:20.667385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.879 qpair failed and we were unable to recover it. 00:25:04.879 [2024-07-16 01:18:20.667540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.880 [2024-07-16 01:18:20.667566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.880 qpair failed and we were unable to recover it. 00:25:04.880 [2024-07-16 01:18:20.667683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.880 [2024-07-16 01:18:20.667710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.880 qpair failed and we were unable to recover it. 
00:25:04.880 [2024-07-16 01:18:20.667831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.880 [2024-07-16 01:18:20.667858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.880 qpair failed and we were unable to recover it. 00:25:04.880 [2024-07-16 01:18:20.667987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.880 [2024-07-16 01:18:20.668015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.880 qpair failed and we were unable to recover it. 00:25:04.880 [2024-07-16 01:18:20.668120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.880 [2024-07-16 01:18:20.668147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.880 qpair failed and we were unable to recover it. 00:25:04.880 [2024-07-16 01:18:20.668251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.880 [2024-07-16 01:18:20.668278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.880 qpair failed and we were unable to recover it. 00:25:04.880 [2024-07-16 01:18:20.668402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.880 [2024-07-16 01:18:20.668428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.880 qpair failed and we were unable to recover it. 00:25:04.880 [2024-07-16 01:18:20.668562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.880 [2024-07-16 01:18:20.668589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.880 qpair failed and we were unable to recover it. 00:25:04.880 [2024-07-16 01:18:20.668698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.880 [2024-07-16 01:18:20.668725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.880 qpair failed and we were unable to recover it. 00:25:04.880 [2024-07-16 01:18:20.668847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.880 [2024-07-16 01:18:20.668873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.880 qpair failed and we were unable to recover it. 00:25:04.880 [2024-07-16 01:18:20.668973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.880 [2024-07-16 01:18:20.669000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.880 qpair failed and we were unable to recover it. 00:25:04.880 [2024-07-16 01:18:20.669125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.880 [2024-07-16 01:18:20.669151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.880 qpair failed and we were unable to recover it. 
00:25:04.880 [2024-07-16 01:18:20.669277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.880 [2024-07-16 01:18:20.669304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.880 qpair failed and we were unable to recover it. 00:25:04.880 [2024-07-16 01:18:20.669407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.880 [2024-07-16 01:18:20.669433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.880 qpair failed and we were unable to recover it. 00:25:04.880 [2024-07-16 01:18:20.669537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.880 [2024-07-16 01:18:20.669565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.880 qpair failed and we were unable to recover it. 00:25:04.880 [2024-07-16 01:18:20.669683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.880 [2024-07-16 01:18:20.669711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.880 qpair failed and we were unable to recover it. 00:25:04.880 [2024-07-16 01:18:20.669840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.880 [2024-07-16 01:18:20.669867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.880 qpair failed and we were unable to recover it. 00:25:04.880 [2024-07-16 01:18:20.669985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.880 [2024-07-16 01:18:20.670026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.880 qpair failed and we were unable to recover it. 00:25:04.880 [2024-07-16 01:18:20.670161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.880 [2024-07-16 01:18:20.670189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.880 qpair failed and we were unable to recover it. 00:25:04.880 [2024-07-16 01:18:20.670336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.880 [2024-07-16 01:18:20.670362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.880 qpair failed and we were unable to recover it. 00:25:04.880 [2024-07-16 01:18:20.670462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.880 [2024-07-16 01:18:20.670490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.880 qpair failed and we were unable to recover it. 00:25:04.880 [2024-07-16 01:18:20.670589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.880 [2024-07-16 01:18:20.670615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.880 qpair failed and we were unable to recover it. 
00:25:04.880 [2024-07-16 01:18:20.670736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.880 [2024-07-16 01:18:20.670764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.880 qpair failed and we were unable to recover it. 00:25:04.880 [2024-07-16 01:18:20.670893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.880 [2024-07-16 01:18:20.670919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.880 qpair failed and we were unable to recover it. 00:25:04.880 [2024-07-16 01:18:20.671028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.880 [2024-07-16 01:18:20.671055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.880 qpair failed and we were unable to recover it. 00:25:04.880 [2024-07-16 01:18:20.671167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.880 [2024-07-16 01:18:20.671193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.880 qpair failed and we were unable to recover it. 00:25:04.880 [2024-07-16 01:18:20.671301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.880 [2024-07-16 01:18:20.671327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.880 qpair failed and we were unable to recover it. 00:25:04.880 [2024-07-16 01:18:20.671454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.880 [2024-07-16 01:18:20.671480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.880 qpair failed and we were unable to recover it. 00:25:04.880 [2024-07-16 01:18:20.671610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.880 [2024-07-16 01:18:20.671636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.880 qpair failed and we were unable to recover it. 00:25:04.880 [2024-07-16 01:18:20.671788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.880 [2024-07-16 01:18:20.671817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.880 qpair failed and we were unable to recover it. 00:25:04.880 [2024-07-16 01:18:20.671914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.880 [2024-07-16 01:18:20.671940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.880 qpair failed and we were unable to recover it. 00:25:04.880 [2024-07-16 01:18:20.672086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.880 [2024-07-16 01:18:20.672126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.880 qpair failed and we were unable to recover it. 
00:25:04.880 [2024-07-16 01:18:20.672265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.880 [2024-07-16 01:18:20.672294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.880 qpair failed and we were unable to recover it. 00:25:04.880 [2024-07-16 01:18:20.672421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.880 [2024-07-16 01:18:20.672450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.880 qpair failed and we were unable to recover it. 00:25:04.880 [2024-07-16 01:18:20.672584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.880 [2024-07-16 01:18:20.672612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.880 qpair failed and we were unable to recover it. 00:25:04.880 [2024-07-16 01:18:20.672738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.880 [2024-07-16 01:18:20.672765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.880 qpair failed and we were unable to recover it. 00:25:04.880 [2024-07-16 01:18:20.672890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.880 [2024-07-16 01:18:20.672917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.880 qpair failed and we were unable to recover it. 00:25:04.880 [2024-07-16 01:18:20.673023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.880 [2024-07-16 01:18:20.673052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.880 qpair failed and we were unable to recover it. 00:25:04.880 [2024-07-16 01:18:20.673179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.880 [2024-07-16 01:18:20.673213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.880 qpair failed and we were unable to recover it. 00:25:04.880 [2024-07-16 01:18:20.673342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.880 [2024-07-16 01:18:20.673369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.880 qpair failed and we were unable to recover it. 00:25:04.880 [2024-07-16 01:18:20.673496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.880 [2024-07-16 01:18:20.673524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.880 qpair failed and we were unable to recover it. 00:25:04.881 [2024-07-16 01:18:20.673673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.881 [2024-07-16 01:18:20.673701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.881 qpair failed and we were unable to recover it. 
00:25:04.881 [2024-07-16 01:18:20.673804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.881 [2024-07-16 01:18:20.673831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.881 qpair failed and we were unable to recover it. 00:25:04.881 [2024-07-16 01:18:20.673992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.881 [2024-07-16 01:18:20.674020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.881 qpair failed and we were unable to recover it. 00:25:04.881 [2024-07-16 01:18:20.674149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.881 [2024-07-16 01:18:20.674176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.881 qpair failed and we were unable to recover it. 00:25:04.881 [2024-07-16 01:18:20.674271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.881 [2024-07-16 01:18:20.674297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.881 qpair failed and we were unable to recover it. 00:25:04.881 [2024-07-16 01:18:20.674403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.881 [2024-07-16 01:18:20.674430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.881 qpair failed and we were unable to recover it. 00:25:04.881 [2024-07-16 01:18:20.674552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.881 [2024-07-16 01:18:20.674580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.881 qpair failed and we were unable to recover it. 00:25:04.881 [2024-07-16 01:18:20.674729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.881 [2024-07-16 01:18:20.674756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.881 qpair failed and we were unable to recover it. 00:25:04.881 [2024-07-16 01:18:20.674882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.881 [2024-07-16 01:18:20.674909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.881 qpair failed and we were unable to recover it. 00:25:04.881 [2024-07-16 01:18:20.675047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.881 [2024-07-16 01:18:20.675074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.881 qpair failed and we were unable to recover it. 00:25:04.881 [2024-07-16 01:18:20.675172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.881 [2024-07-16 01:18:20.675200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.881 qpair failed and we were unable to recover it. 
00:25:04.881 [2024-07-16 01:18:20.675342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.881 [2024-07-16 01:18:20.675370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.881 qpair failed and we were unable to recover it. 00:25:04.881 [2024-07-16 01:18:20.675498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.881 [2024-07-16 01:18:20.675525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.881 qpair failed and we were unable to recover it. 00:25:04.881 [2024-07-16 01:18:20.675615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.881 [2024-07-16 01:18:20.675642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.881 qpair failed and we were unable to recover it. 00:25:04.881 [2024-07-16 01:18:20.675783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.881 [2024-07-16 01:18:20.675824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.881 qpair failed and we were unable to recover it. 00:25:04.881 [2024-07-16 01:18:20.675932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.881 [2024-07-16 01:18:20.675972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.881 qpair failed and we were unable to recover it. 00:25:04.881 [2024-07-16 01:18:20.676103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.881 [2024-07-16 01:18:20.676130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.881 qpair failed and we were unable to recover it. 00:25:04.881 [2024-07-16 01:18:20.676235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.881 [2024-07-16 01:18:20.676261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.881 qpair failed and we were unable to recover it. 00:25:04.881 [2024-07-16 01:18:20.676386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.881 [2024-07-16 01:18:20.676412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.881 qpair failed and we were unable to recover it. 00:25:04.881 [2024-07-16 01:18:20.676552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.881 [2024-07-16 01:18:20.676578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.881 qpair failed and we were unable to recover it. 00:25:04.881 [2024-07-16 01:18:20.676675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.881 [2024-07-16 01:18:20.676701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.881 qpair failed and we were unable to recover it. 
00:25:04.881 [2024-07-16 01:18:20.676827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.881 [2024-07-16 01:18:20.676853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.881 qpair failed and we were unable to recover it. 00:25:04.881 [2024-07-16 01:18:20.676960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.881 [2024-07-16 01:18:20.676987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.881 qpair failed and we were unable to recover it. 00:25:04.881 [2024-07-16 01:18:20.677116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.881 [2024-07-16 01:18:20.677142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.881 qpair failed and we were unable to recover it. 00:25:04.881 [2024-07-16 01:18:20.677240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.881 [2024-07-16 01:18:20.677272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.881 qpair failed and we were unable to recover it. 00:25:04.881 [2024-07-16 01:18:20.677370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.881 [2024-07-16 01:18:20.677396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.881 qpair failed and we were unable to recover it. 00:25:04.881 [2024-07-16 01:18:20.677530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.881 [2024-07-16 01:18:20.677556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.881 qpair failed and we were unable to recover it. 00:25:04.881 [2024-07-16 01:18:20.677657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.881 [2024-07-16 01:18:20.677684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.881 qpair failed and we were unable to recover it. 00:25:04.881 [2024-07-16 01:18:20.677797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.881 [2024-07-16 01:18:20.677838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.881 qpair failed and we were unable to recover it. 00:25:04.881 [2024-07-16 01:18:20.677998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.881 [2024-07-16 01:18:20.678028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.881 qpair failed and we were unable to recover it. 00:25:04.881 [2024-07-16 01:18:20.678136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.881 [2024-07-16 01:18:20.678177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.881 qpair failed and we were unable to recover it. 
00:25:04.881 [2024-07-16 01:18:20.678285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.881 [2024-07-16 01:18:20.678314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.881 qpair failed and we were unable to recover it. 00:25:04.881 [2024-07-16 01:18:20.678413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.881 [2024-07-16 01:18:20.678440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.881 qpair failed and we were unable to recover it. 00:25:04.881 [2024-07-16 01:18:20.678564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.881 [2024-07-16 01:18:20.678590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.881 qpair failed and we were unable to recover it. 00:25:04.881 [2024-07-16 01:18:20.678713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.881 [2024-07-16 01:18:20.678739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.881 qpair failed and we were unable to recover it. 00:25:04.882 [2024-07-16 01:18:20.678867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.882 [2024-07-16 01:18:20.678893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.882 qpair failed and we were unable to recover it. 00:25:04.882 [2024-07-16 01:18:20.679020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.882 [2024-07-16 01:18:20.679047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.882 qpair failed and we were unable to recover it. 00:25:04.882 [2024-07-16 01:18:20.679147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.882 [2024-07-16 01:18:20.679174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.882 qpair failed and we were unable to recover it. 00:25:04.882 [2024-07-16 01:18:20.679281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.882 [2024-07-16 01:18:20.679308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.882 qpair failed and we were unable to recover it. 00:25:04.882 [2024-07-16 01:18:20.679414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.882 [2024-07-16 01:18:20.679441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.882 qpair failed and we were unable to recover it. 00:25:04.882 [2024-07-16 01:18:20.679545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.882 [2024-07-16 01:18:20.679572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.882 qpair failed and we were unable to recover it. 
00:25:04.882 [2024-07-16 01:18:20.679672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:25:04.882 [2024-07-16 01:18:20.679699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 
00:25:04.882 qpair failed and we were unable to recover it. 
00:25:04.882 [... the same three-record error pattern repeats approximately 200 more times between 01:18:20.679 and 01:18:20.711, alternating over tqpair=0x14b73f0, 0x7f7a58000b90, 0x7f7a60000b90, and 0x7f7a68000b90, always with addr=10.0.0.2, port=4420, errno = 111 (connection refused), and each attempt ending in "qpair failed and we were unable to recover it." ...] 
00:25:04.887 [2024-07-16 01:18:20.711060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:25:04.887 [2024-07-16 01:18:20.711086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 
00:25:04.887 qpair failed and we were unable to recover it. 
00:25:04.887 [2024-07-16 01:18:20.711177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.887 [2024-07-16 01:18:20.711204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.887 qpair failed and we were unable to recover it. 00:25:04.887 [2024-07-16 01:18:20.711296] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:04.887 [2024-07-16 01:18:20.711336] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:04.887 [2024-07-16 01:18:20.711352] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:04.887 [2024-07-16 01:18:20.711365] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:04.887 [2024-07-16 01:18:20.711377] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:04.887 [2024-07-16 01:18:20.711311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.887 [2024-07-16 01:18:20.711340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.887 qpair failed and we were unable to recover it. 00:25:04.887 [2024-07-16 01:18:20.711450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:25:04.887 [2024-07-16 01:18:20.711548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:25:04.887 [2024-07-16 01:18:20.711636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:25:04.887 [2024-07-16 01:18:20.711643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:25:04.887 [2024-07-16 01:18:20.711473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.887 [2024-07-16 01:18:20.711558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.887 qpair failed and we were unable to recover it. 00:25:04.887 [2024-07-16 01:18:20.711745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.887 [2024-07-16 01:18:20.711771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.887 qpair failed and we were unable to recover it. 00:25:04.887 [2024-07-16 01:18:20.711897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.887 [2024-07-16 01:18:20.711922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.887 qpair failed and we were unable to recover it. 00:25:04.887 [2024-07-16 01:18:20.712034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.887 [2024-07-16 01:18:20.712063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.887 qpair failed and we were unable to recover it. 00:25:04.887 [2024-07-16 01:18:20.712199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.887 [2024-07-16 01:18:20.712225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.887 qpair failed and we were unable to recover it. 
00:25:04.887 [2024-07-16 01:18:20.712324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.887 [2024-07-16 01:18:20.712351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.887 qpair failed and we were unable to recover it. 00:25:04.887 [2024-07-16 01:18:20.712487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.887 [2024-07-16 01:18:20.712514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.887 qpair failed and we were unable to recover it. 00:25:04.887 [2024-07-16 01:18:20.712622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.887 [2024-07-16 01:18:20.712653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.887 qpair failed and we were unable to recover it. 00:25:04.887 [2024-07-16 01:18:20.712776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.887 [2024-07-16 01:18:20.712814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.887 qpair failed and we were unable to recover it. 00:25:04.887 [2024-07-16 01:18:20.712917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.887 [2024-07-16 01:18:20.712949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.887 qpair failed and we were unable to recover it. 00:25:04.887 [2024-07-16 01:18:20.713084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.887 [2024-07-16 01:18:20.713111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.887 qpair failed and we were unable to recover it. 00:25:04.887 [2024-07-16 01:18:20.713215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.887 [2024-07-16 01:18:20.713243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.887 qpair failed and we were unable to recover it. 00:25:04.887 [2024-07-16 01:18:20.713370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.887 [2024-07-16 01:18:20.713396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.887 qpair failed and we were unable to recover it. 00:25:04.887 [2024-07-16 01:18:20.713492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.887 [2024-07-16 01:18:20.713518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.887 qpair failed and we were unable to recover it. 00:25:04.887 [2024-07-16 01:18:20.713614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.887 [2024-07-16 01:18:20.713641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.887 qpair failed and we were unable to recover it. 
00:25:04.887 [2024-07-16 01:18:20.713763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.887 [2024-07-16 01:18:20.713808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.887 qpair failed and we were unable to recover it. 00:25:04.887 [2024-07-16 01:18:20.713912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.887 [2024-07-16 01:18:20.713941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.887 qpair failed and we were unable to recover it. 00:25:04.887 [2024-07-16 01:18:20.714066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.887 [2024-07-16 01:18:20.714093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.887 qpair failed and we were unable to recover it. 00:25:04.887 [2024-07-16 01:18:20.714227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.887 [2024-07-16 01:18:20.714255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.887 qpair failed and we were unable to recover it. 00:25:04.887 [2024-07-16 01:18:20.714363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.887 [2024-07-16 01:18:20.714389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.887 qpair failed and we were unable to recover it. 00:25:04.888 [2024-07-16 01:18:20.714485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.888 [2024-07-16 01:18:20.714513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.888 qpair failed and we were unable to recover it. 00:25:04.888 [2024-07-16 01:18:20.714613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.888 [2024-07-16 01:18:20.714641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.888 qpair failed and we were unable to recover it. 00:25:04.888 [2024-07-16 01:18:20.714752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.888 [2024-07-16 01:18:20.714780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.888 qpair failed and we were unable to recover it. 00:25:04.888 [2024-07-16 01:18:20.714913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.888 [2024-07-16 01:18:20.714940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.888 qpair failed and we were unable to recover it. 00:25:04.888 [2024-07-16 01:18:20.715047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.888 [2024-07-16 01:18:20.715074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.888 qpair failed and we were unable to recover it. 
00:25:04.888 [2024-07-16 01:18:20.715170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.888 [2024-07-16 01:18:20.715197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.888 qpair failed and we were unable to recover it. 00:25:04.888 [2024-07-16 01:18:20.715332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.888 [2024-07-16 01:18:20.715358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.888 qpair failed and we were unable to recover it. 00:25:04.888 [2024-07-16 01:18:20.715488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.888 [2024-07-16 01:18:20.715515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.888 qpair failed and we were unable to recover it. 00:25:04.888 [2024-07-16 01:18:20.715618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.888 [2024-07-16 01:18:20.715645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.888 qpair failed and we were unable to recover it. 00:25:04.888 [2024-07-16 01:18:20.715763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.888 [2024-07-16 01:18:20.715790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.888 qpair failed and we were unable to recover it. 00:25:04.888 [2024-07-16 01:18:20.715892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.888 [2024-07-16 01:18:20.715919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.888 qpair failed and we were unable to recover it. 00:25:04.888 [2024-07-16 01:18:20.716069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.888 [2024-07-16 01:18:20.716110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.888 qpair failed and we were unable to recover it. 00:25:04.888 [2024-07-16 01:18:20.716211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.888 [2024-07-16 01:18:20.716240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.888 qpair failed and we were unable to recover it. 00:25:04.888 [2024-07-16 01:18:20.716360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.888 [2024-07-16 01:18:20.716388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.888 qpair failed and we were unable to recover it. 00:25:04.888 [2024-07-16 01:18:20.716487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.888 [2024-07-16 01:18:20.716513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.888 qpair failed and we were unable to recover it. 
00:25:04.888 [2024-07-16 01:18:20.716619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.888 [2024-07-16 01:18:20.716645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.888 qpair failed and we were unable to recover it. 00:25:04.888 [2024-07-16 01:18:20.716766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.888 [2024-07-16 01:18:20.716795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.888 qpair failed and we were unable to recover it. 00:25:04.888 [2024-07-16 01:18:20.716922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.888 [2024-07-16 01:18:20.716970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.888 qpair failed and we were unable to recover it. 00:25:04.888 [2024-07-16 01:18:20.717094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.888 [2024-07-16 01:18:20.717121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.888 qpair failed and we were unable to recover it. 00:25:04.888 [2024-07-16 01:18:20.717219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.888 [2024-07-16 01:18:20.717257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.888 qpair failed and we were unable to recover it. 00:25:04.888 [2024-07-16 01:18:20.717413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.888 [2024-07-16 01:18:20.717440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.888 qpair failed and we were unable to recover it. 00:25:04.888 [2024-07-16 01:18:20.717535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.888 [2024-07-16 01:18:20.717563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.888 qpair failed and we were unable to recover it. 00:25:04.888 [2024-07-16 01:18:20.717666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.888 [2024-07-16 01:18:20.717694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.888 qpair failed and we were unable to recover it. 00:25:04.888 [2024-07-16 01:18:20.717828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.888 [2024-07-16 01:18:20.717857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.888 qpair failed and we were unable to recover it. 00:25:04.888 [2024-07-16 01:18:20.717982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.888 [2024-07-16 01:18:20.718010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.888 qpair failed and we were unable to recover it. 
00:25:04.888 [2024-07-16 01:18:20.718111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.888 [2024-07-16 01:18:20.718139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.888 qpair failed and we were unable to recover it. 00:25:04.888 [2024-07-16 01:18:20.718251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.888 [2024-07-16 01:18:20.718278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.888 qpair failed and we were unable to recover it. 00:25:04.888 [2024-07-16 01:18:20.718375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.888 [2024-07-16 01:18:20.718403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.888 qpair failed and we were unable to recover it. 00:25:04.888 [2024-07-16 01:18:20.718503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.888 [2024-07-16 01:18:20.718529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.888 qpair failed and we were unable to recover it. 00:25:04.888 [2024-07-16 01:18:20.718657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.888 [2024-07-16 01:18:20.718692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.888 qpair failed and we were unable to recover it. 00:25:04.888 [2024-07-16 01:18:20.718814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.888 [2024-07-16 01:18:20.718855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.888 qpair failed and we were unable to recover it. 00:25:04.888 [2024-07-16 01:18:20.718997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.888 [2024-07-16 01:18:20.719026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.888 qpair failed and we were unable to recover it. 00:25:04.888 [2024-07-16 01:18:20.719131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.888 [2024-07-16 01:18:20.719157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.888 qpair failed and we were unable to recover it. 00:25:04.888 [2024-07-16 01:18:20.719253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.888 [2024-07-16 01:18:20.719279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.888 qpair failed and we were unable to recover it. 00:25:04.888 [2024-07-16 01:18:20.719380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.888 [2024-07-16 01:18:20.719406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.888 qpair failed and we were unable to recover it. 
00:25:04.888 [2024-07-16 01:18:20.719517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.888 [2024-07-16 01:18:20.719544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.888 qpair failed and we were unable to recover it. 00:25:04.888 [2024-07-16 01:18:20.719665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.888 [2024-07-16 01:18:20.719692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.888 qpair failed and we were unable to recover it. 00:25:04.888 [2024-07-16 01:18:20.719795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.888 [2024-07-16 01:18:20.719822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.888 qpair failed and we were unable to recover it. 00:25:04.888 [2024-07-16 01:18:20.719922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.888 [2024-07-16 01:18:20.719965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.888 qpair failed and we were unable to recover it. 00:25:04.888 [2024-07-16 01:18:20.720073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.888 [2024-07-16 01:18:20.720100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.888 qpair failed and we were unable to recover it. 00:25:04.888 [2024-07-16 01:18:20.720190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.888 [2024-07-16 01:18:20.720216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.889 qpair failed and we were unable to recover it. 00:25:04.889 [2024-07-16 01:18:20.720337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.889 [2024-07-16 01:18:20.720366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.889 qpair failed and we were unable to recover it. 00:25:04.889 [2024-07-16 01:18:20.720467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.889 [2024-07-16 01:18:20.720494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.889 qpair failed and we were unable to recover it. 00:25:04.889 [2024-07-16 01:18:20.720630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.889 [2024-07-16 01:18:20.720671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.889 qpair failed and we were unable to recover it. 00:25:04.889 [2024-07-16 01:18:20.720797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.889 [2024-07-16 01:18:20.720825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.889 qpair failed and we were unable to recover it. 
00:25:04.889 [2024-07-16 01:18:20.720926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.889 [2024-07-16 01:18:20.720966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.889 qpair failed and we were unable to recover it. 00:25:04.889 [2024-07-16 01:18:20.721061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.889 [2024-07-16 01:18:20.721089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.889 qpair failed and we were unable to recover it. 00:25:04.889 [2024-07-16 01:18:20.721195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.889 [2024-07-16 01:18:20.721223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.889 qpair failed and we were unable to recover it. 00:25:04.889 [2024-07-16 01:18:20.721327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.889 [2024-07-16 01:18:20.721355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.889 qpair failed and we were unable to recover it. 00:25:04.889 [2024-07-16 01:18:20.721466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.889 [2024-07-16 01:18:20.721495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.889 qpair failed and we were unable to recover it. 00:25:04.889 [2024-07-16 01:18:20.721593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.889 [2024-07-16 01:18:20.721621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.889 qpair failed and we were unable to recover it. 00:25:04.889 [2024-07-16 01:18:20.721730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.889 [2024-07-16 01:18:20.721778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.889 qpair failed and we were unable to recover it. 00:25:04.889 [2024-07-16 01:18:20.721887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.889 [2024-07-16 01:18:20.721916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.889 qpair failed and we were unable to recover it. 00:25:04.889 [2024-07-16 01:18:20.722027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.889 [2024-07-16 01:18:20.722055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.889 qpair failed and we were unable to recover it. 00:25:04.889 [2024-07-16 01:18:20.722155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.889 [2024-07-16 01:18:20.722192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.889 qpair failed and we were unable to recover it. 
00:25:04.889 [2024-07-16 01:18:20.722295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.889 [2024-07-16 01:18:20.722322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.889 qpair failed and we were unable to recover it. 00:25:04.889 [2024-07-16 01:18:20.722419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.889 [2024-07-16 01:18:20.722452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.889 qpair failed and we were unable to recover it. 00:25:04.889 [2024-07-16 01:18:20.722566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.889 [2024-07-16 01:18:20.722594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.889 qpair failed and we were unable to recover it. 00:25:04.889 [2024-07-16 01:18:20.722693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.889 [2024-07-16 01:18:20.722723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.889 qpair failed and we were unable to recover it. 00:25:04.889 [2024-07-16 01:18:20.722848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.889 [2024-07-16 01:18:20.722896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.889 qpair failed and we were unable to recover it. 00:25:04.889 [2024-07-16 01:18:20.723044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.889 [2024-07-16 01:18:20.723072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.889 qpair failed and we were unable to recover it. 00:25:04.889 [2024-07-16 01:18:20.723177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.889 [2024-07-16 01:18:20.723204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.889 qpair failed and we were unable to recover it. 00:25:04.889 [2024-07-16 01:18:20.723304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.889 [2024-07-16 01:18:20.723331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.889 qpair failed and we were unable to recover it. 00:25:04.889 [2024-07-16 01:18:20.723439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.889 [2024-07-16 01:18:20.723465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.889 qpair failed and we were unable to recover it. 00:25:04.889 [2024-07-16 01:18:20.723566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.889 [2024-07-16 01:18:20.723595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.889 qpair failed and we were unable to recover it. 
00:25:04.889 [2024-07-16 01:18:20.723697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.889 [2024-07-16 01:18:20.723724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.889 qpair failed and we were unable to recover it. 00:25:04.889 [2024-07-16 01:18:20.723857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.889 [2024-07-16 01:18:20.723883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.889 qpair failed and we were unable to recover it. 00:25:04.889 [2024-07-16 01:18:20.723994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.889 [2024-07-16 01:18:20.724022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.889 qpair failed and we were unable to recover it. 00:25:04.889 [2024-07-16 01:18:20.724150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.889 [2024-07-16 01:18:20.724177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.889 qpair failed and we were unable to recover it. 00:25:04.889 [2024-07-16 01:18:20.724286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.889 [2024-07-16 01:18:20.724315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.889 qpair failed and we were unable to recover it. 00:25:04.889 [2024-07-16 01:18:20.724425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.889 [2024-07-16 01:18:20.724453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.889 qpair failed and we were unable to recover it. 00:25:04.889 [2024-07-16 01:18:20.724565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.889 [2024-07-16 01:18:20.724592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.889 qpair failed and we were unable to recover it. 00:25:04.889 [2024-07-16 01:18:20.724692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.889 [2024-07-16 01:18:20.724718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.889 qpair failed and we were unable to recover it. 00:25:04.889 [2024-07-16 01:18:20.724823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.889 [2024-07-16 01:18:20.724850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.889 qpair failed and we were unable to recover it. 00:25:04.889 [2024-07-16 01:18:20.724992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.890 [2024-07-16 01:18:20.725020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.890 qpair failed and we were unable to recover it. 
00:25:04.890 [2024-07-16 01:18:20.725125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.890 [2024-07-16 01:18:20.725151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.890 qpair failed and we were unable to recover it. 00:25:04.890 [2024-07-16 01:18:20.725246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.890 [2024-07-16 01:18:20.725272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.890 qpair failed and we were unable to recover it. 00:25:04.890 [2024-07-16 01:18:20.725378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.890 [2024-07-16 01:18:20.725405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.890 qpair failed and we were unable to recover it. 00:25:04.890 [2024-07-16 01:18:20.725499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.890 [2024-07-16 01:18:20.725526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.890 qpair failed and we were unable to recover it. 00:25:04.890 [2024-07-16 01:18:20.725670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.890 [2024-07-16 01:18:20.725696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.890 qpair failed and we were unable to recover it. 00:25:04.890 [2024-07-16 01:18:20.725795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.890 [2024-07-16 01:18:20.725822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.890 qpair failed and we were unable to recover it. 00:25:04.890 [2024-07-16 01:18:20.725924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.890 [2024-07-16 01:18:20.725963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.890 qpair failed and we were unable to recover it. 00:25:04.890 [2024-07-16 01:18:20.726085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.890 [2024-07-16 01:18:20.726111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.890 qpair failed and we were unable to recover it. 00:25:04.890 [2024-07-16 01:18:20.726259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.890 [2024-07-16 01:18:20.726286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.890 qpair failed and we were unable to recover it. 00:25:04.890 [2024-07-16 01:18:20.726376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.890 [2024-07-16 01:18:20.726403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.890 qpair failed and we were unable to recover it. 
00:25:04.890 [2024-07-16 01:18:20.726539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.890 [2024-07-16 01:18:20.726566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.890 qpair failed and we were unable to recover it. 00:25:04.890 [2024-07-16 01:18:20.726690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.890 [2024-07-16 01:18:20.726716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.890 qpair failed and we were unable to recover it. 00:25:04.890 [2024-07-16 01:18:20.726836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.890 [2024-07-16 01:18:20.726862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.890 qpair failed and we were unable to recover it. 00:25:04.890 [2024-07-16 01:18:20.726964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.890 [2024-07-16 01:18:20.726994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.890 qpair failed and we were unable to recover it. 00:25:04.890 [2024-07-16 01:18:20.727096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.890 [2024-07-16 01:18:20.727124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.890 qpair failed and we were unable to recover it. 00:25:04.890 [2024-07-16 01:18:20.727237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.890 [2024-07-16 01:18:20.727278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.890 qpair failed and we were unable to recover it. 00:25:04.890 [2024-07-16 01:18:20.727377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.890 [2024-07-16 01:18:20.727411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.890 qpair failed and we were unable to recover it. 00:25:04.890 [2024-07-16 01:18:20.727552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.890 [2024-07-16 01:18:20.727593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.890 qpair failed and we were unable to recover it. 00:25:04.890 [2024-07-16 01:18:20.727728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.890 [2024-07-16 01:18:20.727757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.890 qpair failed and we were unable to recover it. 00:25:04.890 [2024-07-16 01:18:20.727853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.890 [2024-07-16 01:18:20.727880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.890 qpair failed and we were unable to recover it. 
00:25:04.890 [2024-07-16 01:18:20.728005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.890 [2024-07-16 01:18:20.728033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.890 qpair failed and we were unable to recover it. 00:25:04.890 [2024-07-16 01:18:20.728151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.890 [2024-07-16 01:18:20.728184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.890 qpair failed and we were unable to recover it. 00:25:04.890 [2024-07-16 01:18:20.728296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.890 [2024-07-16 01:18:20.728325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.890 qpair failed and we were unable to recover it. 00:25:04.890 [2024-07-16 01:18:20.728422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.890 [2024-07-16 01:18:20.728449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.890 qpair failed and we were unable to recover it. 00:25:04.890 [2024-07-16 01:18:20.728542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.890 [2024-07-16 01:18:20.728578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.890 qpair failed and we were unable to recover it. 00:25:04.890 [2024-07-16 01:18:20.728677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.890 [2024-07-16 01:18:20.728706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.890 qpair failed and we were unable to recover it. 00:25:04.890 [2024-07-16 01:18:20.728796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.890 [2024-07-16 01:18:20.728821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.890 qpair failed and we were unable to recover it. 00:25:04.890 [2024-07-16 01:18:20.728922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.890 [2024-07-16 01:18:20.728978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.890 qpair failed and we were unable to recover it. 00:25:04.890 [2024-07-16 01:18:20.729071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.890 [2024-07-16 01:18:20.729097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.890 qpair failed and we were unable to recover it. 00:25:04.890 [2024-07-16 01:18:20.729190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.890 [2024-07-16 01:18:20.729217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.890 qpair failed and we were unable to recover it. 
00:25:04.890 [2024-07-16 01:18:20.729353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.890 [2024-07-16 01:18:20.729379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:04.890 qpair failed and we were unable to recover it.
[... the same three-line error sequence repeats continuously from 01:18:20.729 through 01:18:20.759 (Jenkins elapsed time 00:25:04.890-00:25:04.895), with the tqpair handle cycling among 0x7f7a68000b90, 0x7f7a60000b90, 0x7f7a58000b90, and 0x14b73f0; every attempt targets addr=10.0.0.2, port=4420 and fails with errno = 111 ...]
00:25:04.895 [2024-07-16 01:18:20.759516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.895 [2024-07-16 01:18:20.759554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.895 qpair failed and we were unable to recover it. 00:25:04.895 [2024-07-16 01:18:20.759679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.895 [2024-07-16 01:18:20.759707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.895 qpair failed and we were unable to recover it. 00:25:04.895 [2024-07-16 01:18:20.759839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.895 [2024-07-16 01:18:20.759866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.895 qpair failed and we were unable to recover it. 00:25:04.895 [2024-07-16 01:18:20.759971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.895 [2024-07-16 01:18:20.760000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.895 qpair failed and we were unable to recover it. 00:25:04.895 [2024-07-16 01:18:20.760125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.895 [2024-07-16 01:18:20.760152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.896 qpair failed and we were unable to recover it. 00:25:04.896 [2024-07-16 01:18:20.760252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.896 [2024-07-16 01:18:20.760279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.896 qpair failed and we were unable to recover it. 00:25:04.896 [2024-07-16 01:18:20.760387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.896 [2024-07-16 01:18:20.760414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.896 qpair failed and we were unable to recover it. 00:25:04.896 [2024-07-16 01:18:20.760540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.896 [2024-07-16 01:18:20.760568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.896 qpair failed and we were unable to recover it. 00:25:04.896 [2024-07-16 01:18:20.760699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.896 [2024-07-16 01:18:20.760731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.896 qpair failed and we were unable to recover it. 00:25:04.896 [2024-07-16 01:18:20.760822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.896 [2024-07-16 01:18:20.760848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.896 qpair failed and we were unable to recover it. 
00:25:04.896 [2024-07-16 01:18:20.760982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.896 [2024-07-16 01:18:20.761008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.896 qpair failed and we were unable to recover it. 00:25:04.896 [2024-07-16 01:18:20.761103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.896 [2024-07-16 01:18:20.761129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.896 qpair failed and we were unable to recover it. 00:25:04.896 [2024-07-16 01:18:20.761237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.896 [2024-07-16 01:18:20.761267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.896 qpair failed and we were unable to recover it. 00:25:04.896 [2024-07-16 01:18:20.761377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.896 [2024-07-16 01:18:20.761406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.896 qpair failed and we were unable to recover it. 00:25:04.896 [2024-07-16 01:18:20.761505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.896 [2024-07-16 01:18:20.761533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.896 qpair failed and we were unable to recover it. 00:25:04.896 [2024-07-16 01:18:20.761636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.896 [2024-07-16 01:18:20.761664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.896 qpair failed and we were unable to recover it. 00:25:04.896 [2024-07-16 01:18:20.761758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.896 [2024-07-16 01:18:20.761785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.896 qpair failed and we were unable to recover it. 00:25:04.896 [2024-07-16 01:18:20.761901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.896 [2024-07-16 01:18:20.761929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.896 qpair failed and we were unable to recover it. 00:25:04.896 [2024-07-16 01:18:20.762044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.896 [2024-07-16 01:18:20.762071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.896 qpair failed and we were unable to recover it. 00:25:04.896 [2024-07-16 01:18:20.762191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.896 [2024-07-16 01:18:20.762218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.896 qpair failed and we were unable to recover it. 
00:25:04.896 [2024-07-16 01:18:20.762318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.896 [2024-07-16 01:18:20.762345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.896 qpair failed and we were unable to recover it. 00:25:04.896 [2024-07-16 01:18:20.762452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.896 [2024-07-16 01:18:20.762481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.896 qpair failed and we were unable to recover it. 00:25:04.896 [2024-07-16 01:18:20.762586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.896 [2024-07-16 01:18:20.762612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.896 qpair failed and we were unable to recover it. 00:25:04.896 [2024-07-16 01:18:20.762706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.896 [2024-07-16 01:18:20.762732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.896 qpair failed and we were unable to recover it. 00:25:04.896 [2024-07-16 01:18:20.762858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.896 [2024-07-16 01:18:20.762885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.896 qpair failed and we were unable to recover it. 00:25:04.896 [2024-07-16 01:18:20.763008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.896 [2024-07-16 01:18:20.763037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.896 qpair failed and we were unable to recover it. 00:25:04.896 [2024-07-16 01:18:20.763144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.896 [2024-07-16 01:18:20.763171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.896 qpair failed and we were unable to recover it. 00:25:04.896 [2024-07-16 01:18:20.763290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.896 [2024-07-16 01:18:20.763318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.896 qpair failed and we were unable to recover it. 00:25:04.896 [2024-07-16 01:18:20.763440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.896 [2024-07-16 01:18:20.763468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.896 qpair failed and we were unable to recover it. 00:25:04.896 [2024-07-16 01:18:20.763577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.896 [2024-07-16 01:18:20.763604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.896 qpair failed and we were unable to recover it. 
00:25:04.896 [2024-07-16 01:18:20.763738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.896 [2024-07-16 01:18:20.763766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.896 qpair failed and we were unable to recover it. 00:25:04.896 [2024-07-16 01:18:20.763932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.896 [2024-07-16 01:18:20.763980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.896 qpair failed and we were unable to recover it. 00:25:04.896 [2024-07-16 01:18:20.764106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.896 [2024-07-16 01:18:20.764143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.896 qpair failed and we were unable to recover it. 00:25:04.896 [2024-07-16 01:18:20.764241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.896 [2024-07-16 01:18:20.764268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.896 qpair failed and we were unable to recover it. 00:25:04.896 [2024-07-16 01:18:20.764389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.896 [2024-07-16 01:18:20.764416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.896 qpair failed and we were unable to recover it. 00:25:04.896 [2024-07-16 01:18:20.764515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.896 [2024-07-16 01:18:20.764542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.896 qpair failed and we were unable to recover it. 00:25:04.896 [2024-07-16 01:18:20.764642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.896 [2024-07-16 01:18:20.764671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.896 qpair failed and we were unable to recover it. 00:25:04.896 [2024-07-16 01:18:20.764822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.896 [2024-07-16 01:18:20.764862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.896 qpair failed and we were unable to recover it. 00:25:04.896 [2024-07-16 01:18:20.764966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.896 [2024-07-16 01:18:20.764994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.896 qpair failed and we were unable to recover it. 00:25:04.896 [2024-07-16 01:18:20.765093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.896 [2024-07-16 01:18:20.765127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.896 qpair failed and we were unable to recover it. 
00:25:04.896 [2024-07-16 01:18:20.765231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.896 [2024-07-16 01:18:20.765257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.896 qpair failed and we were unable to recover it. 00:25:04.896 [2024-07-16 01:18:20.765362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.896 [2024-07-16 01:18:20.765387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.896 qpair failed and we were unable to recover it. 00:25:04.896 [2024-07-16 01:18:20.765497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.896 [2024-07-16 01:18:20.765529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.896 qpair failed and we were unable to recover it. 00:25:04.896 [2024-07-16 01:18:20.765627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.896 [2024-07-16 01:18:20.765654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.896 qpair failed and we were unable to recover it. 00:25:04.896 [2024-07-16 01:18:20.765766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.896 [2024-07-16 01:18:20.765806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.896 qpair failed and we were unable to recover it. 00:25:04.896 [2024-07-16 01:18:20.765926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.896 [2024-07-16 01:18:20.765962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.896 qpair failed and we were unable to recover it. 00:25:04.896 [2024-07-16 01:18:20.766067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.896 [2024-07-16 01:18:20.766105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.896 qpair failed and we were unable to recover it. 00:25:04.896 [2024-07-16 01:18:20.766204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.896 [2024-07-16 01:18:20.766233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.896 qpair failed and we were unable to recover it. 00:25:04.896 [2024-07-16 01:18:20.766359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.896 [2024-07-16 01:18:20.766393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.897 qpair failed and we were unable to recover it. 00:25:04.897 [2024-07-16 01:18:20.766497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.897 [2024-07-16 01:18:20.766525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.897 qpair failed and we were unable to recover it. 
00:25:04.897 [2024-07-16 01:18:20.766620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.897 [2024-07-16 01:18:20.766650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.897 qpair failed and we were unable to recover it. 00:25:04.897 [2024-07-16 01:18:20.766746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.897 [2024-07-16 01:18:20.766773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.897 qpair failed and we were unable to recover it. 00:25:04.897 [2024-07-16 01:18:20.766901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.897 [2024-07-16 01:18:20.766930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.897 qpair failed and we were unable to recover it. 00:25:04.897 [2024-07-16 01:18:20.767039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.897 [2024-07-16 01:18:20.767068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.897 qpair failed and we were unable to recover it. 00:25:04.897 [2024-07-16 01:18:20.767170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.897 [2024-07-16 01:18:20.767198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.897 qpair failed and we were unable to recover it. 00:25:04.897 [2024-07-16 01:18:20.767353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.897 [2024-07-16 01:18:20.767379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.897 qpair failed and we were unable to recover it. 00:25:04.897 [2024-07-16 01:18:20.767484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.897 [2024-07-16 01:18:20.767510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.897 qpair failed and we were unable to recover it. 00:25:04.897 [2024-07-16 01:18:20.767632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.897 [2024-07-16 01:18:20.767659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.897 qpair failed and we were unable to recover it. 00:25:04.897 [2024-07-16 01:18:20.767761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.897 [2024-07-16 01:18:20.767798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.897 qpair failed and we were unable to recover it. 00:25:04.897 [2024-07-16 01:18:20.767898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.897 [2024-07-16 01:18:20.767927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.897 qpair failed and we were unable to recover it. 
00:25:04.897 [2024-07-16 01:18:20.768085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.897 [2024-07-16 01:18:20.768114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.897 qpair failed and we were unable to recover it. 00:25:04.897 [2024-07-16 01:18:20.768210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.897 [2024-07-16 01:18:20.768238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.897 qpair failed and we were unable to recover it. 00:25:04.897 [2024-07-16 01:18:20.768369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.897 [2024-07-16 01:18:20.768396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.897 qpair failed and we were unable to recover it. 00:25:04.897 [2024-07-16 01:18:20.768519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.897 [2024-07-16 01:18:20.768546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.897 qpair failed and we were unable to recover it. 00:25:04.897 [2024-07-16 01:18:20.768656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.897 [2024-07-16 01:18:20.768684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.897 qpair failed and we were unable to recover it. 00:25:04.897 [2024-07-16 01:18:20.768776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.897 [2024-07-16 01:18:20.768804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.897 qpair failed and we were unable to recover it. 00:25:04.897 [2024-07-16 01:18:20.768917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.897 [2024-07-16 01:18:20.768964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.897 qpair failed and we were unable to recover it. 00:25:04.897 [2024-07-16 01:18:20.769101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.897 [2024-07-16 01:18:20.769129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.897 qpair failed and we were unable to recover it. 00:25:04.897 [2024-07-16 01:18:20.769239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.897 [2024-07-16 01:18:20.769266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.897 qpair failed and we were unable to recover it. 00:25:04.897 [2024-07-16 01:18:20.769388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.897 [2024-07-16 01:18:20.769415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.897 qpair failed and we were unable to recover it. 
00:25:04.897 [2024-07-16 01:18:20.769532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.897 [2024-07-16 01:18:20.769559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.897 qpair failed and we were unable to recover it. 00:25:04.897 [2024-07-16 01:18:20.769649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.897 [2024-07-16 01:18:20.769675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.897 qpair failed and we were unable to recover it. 00:25:04.897 [2024-07-16 01:18:20.769795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.897 [2024-07-16 01:18:20.769835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.897 qpair failed and we were unable to recover it. 00:25:04.897 [2024-07-16 01:18:20.769968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.897 [2024-07-16 01:18:20.769997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.897 qpair failed and we were unable to recover it. 00:25:04.897 [2024-07-16 01:18:20.770101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.897 [2024-07-16 01:18:20.770128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.897 qpair failed and we were unable to recover it. 00:25:04.897 [2024-07-16 01:18:20.770232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.897 [2024-07-16 01:18:20.770264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.897 qpair failed and we were unable to recover it. 00:25:04.897 [2024-07-16 01:18:20.770415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.897 [2024-07-16 01:18:20.770442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.897 qpair failed and we were unable to recover it. 00:25:04.897 [2024-07-16 01:18:20.770582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.897 [2024-07-16 01:18:20.770610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.897 qpair failed and we were unable to recover it. 00:25:04.897 [2024-07-16 01:18:20.770721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.897 [2024-07-16 01:18:20.770748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.897 qpair failed and we were unable to recover it. 00:25:04.897 [2024-07-16 01:18:20.770856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.897 [2024-07-16 01:18:20.770887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.897 qpair failed and we were unable to recover it. 
00:25:04.897 [2024-07-16 01:18:20.771019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.897 [2024-07-16 01:18:20.771047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.897 qpair failed and we were unable to recover it. 00:25:04.897 [2024-07-16 01:18:20.771147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.897 [2024-07-16 01:18:20.771173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.897 qpair failed and we were unable to recover it. 00:25:04.897 [2024-07-16 01:18:20.771273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.897 [2024-07-16 01:18:20.771299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.897 qpair failed and we were unable to recover it. 00:25:04.897 [2024-07-16 01:18:20.771391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.897 [2024-07-16 01:18:20.771424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.897 qpair failed and we were unable to recover it. 00:25:04.897 [2024-07-16 01:18:20.771541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.897 [2024-07-16 01:18:20.771567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.897 qpair failed and we were unable to recover it. 00:25:04.897 [2024-07-16 01:18:20.771667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.897 [2024-07-16 01:18:20.771696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.897 qpair failed and we were unable to recover it. 00:25:04.897 [2024-07-16 01:18:20.771804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.897 [2024-07-16 01:18:20.771845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.897 qpair failed and we were unable to recover it. 00:25:04.897 [2024-07-16 01:18:20.771964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.897 [2024-07-16 01:18:20.771997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.897 qpair failed and we were unable to recover it. 00:25:04.897 [2024-07-16 01:18:20.772097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.898 [2024-07-16 01:18:20.772124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.898 qpair failed and we were unable to recover it. 00:25:04.898 [2024-07-16 01:18:20.772230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.898 [2024-07-16 01:18:20.772258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.898 qpair failed and we were unable to recover it. 
00:25:04.898 [2024-07-16 01:18:20.772393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.898 [2024-07-16 01:18:20.772420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.898 qpair failed and we were unable to recover it. 00:25:04.898 [2024-07-16 01:18:20.772521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.898 [2024-07-16 01:18:20.772555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.898 qpair failed and we were unable to recover it. 00:25:04.898 [2024-07-16 01:18:20.772646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.898 [2024-07-16 01:18:20.772672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.898 qpair failed and we were unable to recover it. 00:25:04.898 [2024-07-16 01:18:20.772764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.898 [2024-07-16 01:18:20.772791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.898 qpair failed and we were unable to recover it. 00:25:04.898 [2024-07-16 01:18:20.772888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.898 [2024-07-16 01:18:20.772914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.898 qpair failed and we were unable to recover it. 00:25:04.898 [2024-07-16 01:18:20.773031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.898 [2024-07-16 01:18:20.773060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.898 qpair failed and we were unable to recover it. 00:25:04.898 [2024-07-16 01:18:20.773173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.898 [2024-07-16 01:18:20.773213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.898 qpair failed and we were unable to recover it. 00:25:04.898 [2024-07-16 01:18:20.773319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.898 [2024-07-16 01:18:20.773349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.898 qpair failed and we were unable to recover it. 00:25:04.898 [2024-07-16 01:18:20.773482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.898 [2024-07-16 01:18:20.773509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.898 qpair failed and we were unable to recover it. 00:25:04.898 [2024-07-16 01:18:20.773614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.898 [2024-07-16 01:18:20.773641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.898 qpair failed and we were unable to recover it. 
00:25:04.898 [2024-07-16 01:18:20.773773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.898 [2024-07-16 01:18:20.773799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.898 qpair failed and we were unable to recover it. 00:25:04.898 [2024-07-16 01:18:20.773924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.898 [2024-07-16 01:18:20.773951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.898 qpair failed and we were unable to recover it. 00:25:04.898 [2024-07-16 01:18:20.774076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.898 [2024-07-16 01:18:20.774117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.898 qpair failed and we were unable to recover it. 00:25:04.898 [2024-07-16 01:18:20.774230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.898 [2024-07-16 01:18:20.774259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.898 qpair failed and we were unable to recover it. 00:25:04.898 [2024-07-16 01:18:20.774367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.898 [2024-07-16 01:18:20.774394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.898 qpair failed and we were unable to recover it. 00:25:04.898 [2024-07-16 01:18:20.774496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.898 [2024-07-16 01:18:20.774523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.898 qpair failed and we were unable to recover it. 00:25:04.898 [2024-07-16 01:18:20.774621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.898 [2024-07-16 01:18:20.774651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.898 qpair failed and we were unable to recover it. 00:25:04.898 [2024-07-16 01:18:20.774785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.898 [2024-07-16 01:18:20.774812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.898 qpair failed and we were unable to recover it. 00:25:04.898 [2024-07-16 01:18:20.774936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.898 [2024-07-16 01:18:20.774971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.898 qpair failed and we were unable to recover it. 00:25:04.898 [2024-07-16 01:18:20.775081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.898 [2024-07-16 01:18:20.775119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.898 qpair failed and we were unable to recover it. 
00:25:04.898 [2024-07-16 01:18:20.778059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.898 [2024-07-16 01:18:20.778114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.898 qpair failed and we were unable to recover it. 00:25:04.898 [2024-07-16 01:18:20.778233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.898 [2024-07-16 01:18:20.778263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.898 qpair failed and we were unable to recover it. 00:25:04.898 [2024-07-16 01:18:20.778366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.898 [2024-07-16 01:18:20.778394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.898 qpair failed and we were unable to recover it. 00:25:04.898 [2024-07-16 01:18:20.778538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.898 [2024-07-16 01:18:20.778566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.898 qpair failed and we were unable to recover it. 00:25:04.898 [2024-07-16 01:18:20.778702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.898 [2024-07-16 01:18:20.778729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.898 qpair failed and we were unable to recover it. 00:25:04.898 [2024-07-16 01:18:20.778831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.898 [2024-07-16 01:18:20.778864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.898 qpair failed and we were unable to recover it. 00:25:04.898 [2024-07-16 01:18:20.778995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.898 [2024-07-16 01:18:20.779023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.898 qpair failed and we were unable to recover it. 00:25:04.898 [2024-07-16 01:18:20.779154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.898 [2024-07-16 01:18:20.779194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.898 qpair failed and we were unable to recover it. 00:25:04.898 [2024-07-16 01:18:20.779311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.898 [2024-07-16 01:18:20.779340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.898 qpair failed and we were unable to recover it. 00:25:04.898 [2024-07-16 01:18:20.779439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.898 [2024-07-16 01:18:20.779467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.898 qpair failed and we were unable to recover it. 
00:25:04.898 [2024-07-16 01:18:20.779591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.898 [2024-07-16 01:18:20.779618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.898 qpair failed and we were unable to recover it. 00:25:04.899 [2024-07-16 01:18:20.779750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.899 [2024-07-16 01:18:20.779788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.899 qpair failed and we were unable to recover it. 00:25:04.899 [2024-07-16 01:18:20.779884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.899 [2024-07-16 01:18:20.779911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.899 qpair failed and we were unable to recover it. 00:25:04.899 [2024-07-16 01:18:20.780042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.899 [2024-07-16 01:18:20.780069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.899 qpair failed and we were unable to recover it. 00:25:04.899 [2024-07-16 01:18:20.780183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.899 [2024-07-16 01:18:20.780210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.899 qpair failed and we were unable to recover it. 00:25:04.899 [2024-07-16 01:18:20.780310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.899 [2024-07-16 01:18:20.780338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.899 qpair failed and we were unable to recover it. 00:25:04.899 [2024-07-16 01:18:20.780469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.899 [2024-07-16 01:18:20.780496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.899 qpair failed and we were unable to recover it. 00:25:04.899 [2024-07-16 01:18:20.780618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.899 [2024-07-16 01:18:20.780645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.899 qpair failed and we were unable to recover it. 00:25:04.899 [2024-07-16 01:18:20.780746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.899 [2024-07-16 01:18:20.780773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.899 qpair failed and we were unable to recover it. 00:25:04.899 [2024-07-16 01:18:20.780921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.899 [2024-07-16 01:18:20.780950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.899 qpair failed and we were unable to recover it. 
00:25:04.899 [2024-07-16 01:18:20.781059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.899 [2024-07-16 01:18:20.781086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:04.899 qpair failed and we were unable to recover it.
00:25:04.899 [... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error; qpair failed and we were unable to recover it) repeats continuously from 01:18:20.781182 through 01:18:20.810991, cycling across tqpairs 0x7f7a58000b90, 0x7f7a60000b90, 0x7f7a68000b90, and 0x14b73f0, all targeting addr=10.0.0.2, port=4420 ...]
00:25:04.904 [2024-07-16 01:18:20.810991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.904 [2024-07-16 01:18:20.811031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:04.904 qpair failed and we were unable to recover it.
00:25:04.904 [2024-07-16 01:18:20.811143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.904 [2024-07-16 01:18:20.811170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.904 qpair failed and we were unable to recover it. 00:25:04.904 [2024-07-16 01:18:20.811277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.904 [2024-07-16 01:18:20.811303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.904 qpair failed and we were unable to recover it. 00:25:04.904 [2024-07-16 01:18:20.811417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.904 [2024-07-16 01:18:20.811444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.904 qpair failed and we were unable to recover it. 00:25:04.904 [2024-07-16 01:18:20.811550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.904 [2024-07-16 01:18:20.811579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.904 qpair failed and we were unable to recover it. 00:25:04.904 [2024-07-16 01:18:20.811685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.904 [2024-07-16 01:18:20.811715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.904 qpair failed and we were unable to recover it. 00:25:04.904 [2024-07-16 01:18:20.811856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.904 [2024-07-16 01:18:20.811883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.904 qpair failed and we were unable to recover it. 00:25:04.904 [2024-07-16 01:18:20.812017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.904 [2024-07-16 01:18:20.812044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.904 qpair failed and we were unable to recover it. 00:25:04.904 [2024-07-16 01:18:20.812150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.904 [2024-07-16 01:18:20.812177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.904 qpair failed and we were unable to recover it. 00:25:04.904 [2024-07-16 01:18:20.812277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.904 [2024-07-16 01:18:20.812306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.904 qpair failed and we were unable to recover it. 00:25:04.904 [2024-07-16 01:18:20.812413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.904 [2024-07-16 01:18:20.812440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.904 qpair failed and we were unable to recover it. 
00:25:04.904 [2024-07-16 01:18:20.812539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.904 [2024-07-16 01:18:20.812566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.904 qpair failed and we were unable to recover it. 00:25:04.904 [2024-07-16 01:18:20.812672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.904 [2024-07-16 01:18:20.812706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.904 qpair failed and we were unable to recover it. 00:25:04.904 [2024-07-16 01:18:20.812804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.904 [2024-07-16 01:18:20.812833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.904 qpair failed and we were unable to recover it. 00:25:04.904 [2024-07-16 01:18:20.812964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.904 [2024-07-16 01:18:20.812993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.904 qpair failed and we were unable to recover it. 00:25:04.904 [2024-07-16 01:18:20.813132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.904 [2024-07-16 01:18:20.813162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.904 qpair failed and we were unable to recover it. 00:25:04.904 [2024-07-16 01:18:20.813292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.904 [2024-07-16 01:18:20.813331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.904 qpair failed and we were unable to recover it. 00:25:04.904 [2024-07-16 01:18:20.813436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.904 [2024-07-16 01:18:20.813469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.904 qpair failed and we were unable to recover it. 00:25:04.904 [2024-07-16 01:18:20.813591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.904 [2024-07-16 01:18:20.813617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.905 qpair failed and we were unable to recover it. 00:25:04.905 [2024-07-16 01:18:20.813711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.905 [2024-07-16 01:18:20.813737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.905 qpair failed and we were unable to recover it. 00:25:04.905 [2024-07-16 01:18:20.813840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.905 [2024-07-16 01:18:20.813866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.905 qpair failed and we were unable to recover it. 
00:25:04.905 [2024-07-16 01:18:20.813967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.905 [2024-07-16 01:18:20.813995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.905 qpair failed and we were unable to recover it. 00:25:04.905 [2024-07-16 01:18:20.814100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.905 [2024-07-16 01:18:20.814129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.905 qpair failed and we were unable to recover it. 00:25:04.905 [2024-07-16 01:18:20.814231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.905 [2024-07-16 01:18:20.814258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.905 qpair failed and we were unable to recover it. 00:25:04.905 [2024-07-16 01:18:20.814389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.905 [2024-07-16 01:18:20.814416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.905 qpair failed and we were unable to recover it. 00:25:04.905 [2024-07-16 01:18:20.814512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.905 [2024-07-16 01:18:20.814540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.905 qpair failed and we were unable to recover it. 00:25:04.905 [2024-07-16 01:18:20.814647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.905 [2024-07-16 01:18:20.814676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.905 qpair failed and we were unable to recover it. 00:25:04.905 [2024-07-16 01:18:20.814771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.905 [2024-07-16 01:18:20.814799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.905 qpair failed and we were unable to recover it. 00:25:04.905 [2024-07-16 01:18:20.814908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.905 [2024-07-16 01:18:20.814952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.905 qpair failed and we were unable to recover it. 00:25:04.905 [2024-07-16 01:18:20.815080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.905 [2024-07-16 01:18:20.815107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.905 qpair failed and we were unable to recover it. 00:25:04.905 [2024-07-16 01:18:20.815238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.905 [2024-07-16 01:18:20.815265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.905 qpair failed and we were unable to recover it. 
00:25:04.905 [2024-07-16 01:18:20.815374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.905 [2024-07-16 01:18:20.815401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.905 qpair failed and we were unable to recover it. 00:25:04.905 [2024-07-16 01:18:20.815501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.905 [2024-07-16 01:18:20.815529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:04.905 qpair failed and we were unable to recover it. 00:25:04.905 [2024-07-16 01:18:20.815629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.905 [2024-07-16 01:18:20.815657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.905 qpair failed and we were unable to recover it. 00:25:04.905 [2024-07-16 01:18:20.815765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.905 [2024-07-16 01:18:20.815793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.905 qpair failed and we were unable to recover it. 00:25:04.905 [2024-07-16 01:18:20.815880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.905 [2024-07-16 01:18:20.815906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.905 qpair failed and we were unable to recover it. 00:25:04.905 [2024-07-16 01:18:20.816038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.905 [2024-07-16 01:18:20.816071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.905 qpair failed and we were unable to recover it. 00:25:04.905 [2024-07-16 01:18:20.816170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.905 [2024-07-16 01:18:20.816197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.905 qpair failed and we were unable to recover it. 00:25:04.905 [2024-07-16 01:18:20.816288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.905 [2024-07-16 01:18:20.816325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.905 qpair failed and we were unable to recover it. 00:25:04.905 [2024-07-16 01:18:20.816425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.905 [2024-07-16 01:18:20.816453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.905 qpair failed and we were unable to recover it. 00:25:04.905 [2024-07-16 01:18:20.816557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.905 [2024-07-16 01:18:20.816583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.905 qpair failed and we were unable to recover it. 
00:25:04.905 [2024-07-16 01:18:20.816691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.905 [2024-07-16 01:18:20.816717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.905 qpair failed and we were unable to recover it. 00:25:04.905 [2024-07-16 01:18:20.816816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.905 [2024-07-16 01:18:20.816844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.905 qpair failed and we were unable to recover it. 00:25:04.905 [2024-07-16 01:18:20.816968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.905 [2024-07-16 01:18:20.817008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.905 qpair failed and we were unable to recover it. 00:25:04.905 [2024-07-16 01:18:20.817121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.905 [2024-07-16 01:18:20.817166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.905 qpair failed and we were unable to recover it. 00:25:04.905 [2024-07-16 01:18:20.817286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.905 [2024-07-16 01:18:20.817314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.905 qpair failed and we were unable to recover it. 00:25:04.905 [2024-07-16 01:18:20.817445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.905 [2024-07-16 01:18:20.817472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.905 qpair failed and we were unable to recover it. 00:25:04.905 [2024-07-16 01:18:20.817593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.905 [2024-07-16 01:18:20.817618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.905 qpair failed and we were unable to recover it. 00:25:04.905 [2024-07-16 01:18:20.817747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.905 [2024-07-16 01:18:20.817784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.905 qpair failed and we were unable to recover it. 00:25:04.905 [2024-07-16 01:18:20.817882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.905 [2024-07-16 01:18:20.817910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:04.905 qpair failed and we were unable to recover it. 00:25:04.905 [2024-07-16 01:18:20.818038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.905 [2024-07-16 01:18:20.818078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.905 qpair failed and we were unable to recover it. 
00:25:04.905 [2024-07-16 01:18:20.818186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.905 [2024-07-16 01:18:20.818213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.905 qpair failed and we were unable to recover it. 00:25:04.905 [2024-07-16 01:18:20.818312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.905 [2024-07-16 01:18:20.818348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.905 qpair failed and we were unable to recover it. 00:25:04.905 [2024-07-16 01:18:20.818447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.905 [2024-07-16 01:18:20.818473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.905 qpair failed and we were unable to recover it. 00:25:04.905 [2024-07-16 01:18:20.818588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.905 [2024-07-16 01:18:20.818616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:04.905 qpair failed and we were unable to recover it. 00:25:04.905 [2024-07-16 01:18:20.818713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.905 [2024-07-16 01:18:20.818740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:04.905 qpair failed and we were unable to recover it. 00:25:04.905 [2024-07-16 01:18:20.818842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.188 [2024-07-16 01:18:20.818870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.188 qpair failed and we were unable to recover it. 00:25:05.188 [2024-07-16 01:18:20.818994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.188 [2024-07-16 01:18:20.819022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.188 qpair failed and we were unable to recover it. 00:25:05.188 [2024-07-16 01:18:20.819138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.188 [2024-07-16 01:18:20.819166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.188 qpair failed and we were unable to recover it. 00:25:05.188 [2024-07-16 01:18:20.819262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.188 [2024-07-16 01:18:20.819289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.188 qpair failed and we were unable to recover it. 00:25:05.188 [2024-07-16 01:18:20.819384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.188 [2024-07-16 01:18:20.819411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.188 qpair failed and we were unable to recover it. 
00:25:05.188 [2024-07-16 01:18:20.819514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.188 [2024-07-16 01:18:20.819542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.188 qpair failed and we were unable to recover it. 00:25:05.188 [2024-07-16 01:18:20.819650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.188 [2024-07-16 01:18:20.819687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.188 qpair failed and we were unable to recover it. 00:25:05.188 [2024-07-16 01:18:20.819792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.188 [2024-07-16 01:18:20.819822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.188 qpair failed and we were unable to recover it. 00:25:05.188 [2024-07-16 01:18:20.819924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.188 [2024-07-16 01:18:20.819951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.188 qpair failed and we were unable to recover it. 00:25:05.188 [2024-07-16 01:18:20.820065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.188 [2024-07-16 01:18:20.820092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.188 qpair failed and we were unable to recover it. 00:25:05.188 [2024-07-16 01:18:20.820188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.188 [2024-07-16 01:18:20.820215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.188 qpair failed and we were unable to recover it. 00:25:05.188 [2024-07-16 01:18:20.820326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.188 [2024-07-16 01:18:20.820353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.188 qpair failed and we were unable to recover it. 00:25:05.188 [2024-07-16 01:18:20.820445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.188 [2024-07-16 01:18:20.820473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.188 qpair failed and we were unable to recover it. 00:25:05.188 [2024-07-16 01:18:20.820573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.188 [2024-07-16 01:18:20.820601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.188 qpair failed and we were unable to recover it. 00:25:05.188 [2024-07-16 01:18:20.820701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.188 [2024-07-16 01:18:20.820729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.188 qpair failed and we were unable to recover it. 
00:25:05.188 [2024-07-16 01:18:20.820834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.188 [2024-07-16 01:18:20.820861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.188 qpair failed and we were unable to recover it. 00:25:05.188 [2024-07-16 01:18:20.820976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.188 [2024-07-16 01:18:20.821004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.188 qpair failed and we were unable to recover it. 00:25:05.188 [2024-07-16 01:18:20.821113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.188 [2024-07-16 01:18:20.821141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.188 qpair failed and we were unable to recover it. 00:25:05.188 [2024-07-16 01:18:20.821239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.188 [2024-07-16 01:18:20.821266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.188 qpair failed and we were unable to recover it. 00:25:05.188 [2024-07-16 01:18:20.821393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.188 [2024-07-16 01:18:20.821420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.188 qpair failed and we were unable to recover it. 00:25:05.188 [2024-07-16 01:18:20.821513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.188 [2024-07-16 01:18:20.821540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.188 qpair failed and we were unable to recover it. 00:25:05.188 [2024-07-16 01:18:20.821640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.188 [2024-07-16 01:18:20.821670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.188 qpair failed and we were unable to recover it. 00:25:05.188 [2024-07-16 01:18:20.821768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.188 [2024-07-16 01:18:20.821796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.188 qpair failed and we were unable to recover it. 00:25:05.188 [2024-07-16 01:18:20.821889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.188 [2024-07-16 01:18:20.821918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.188 qpair failed and we were unable to recover it. 00:25:05.188 [2024-07-16 01:18:20.822036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.188 [2024-07-16 01:18:20.822063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.188 qpair failed and we were unable to recover it. 
00:25:05.188 [2024-07-16 01:18:20.822168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.188 [2024-07-16 01:18:20.822194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.188 qpair failed and we were unable to recover it. 00:25:05.188 [2024-07-16 01:18:20.822296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.188 [2024-07-16 01:18:20.822321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.188 qpair failed and we were unable to recover it. 00:25:05.188 [2024-07-16 01:18:20.822454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.188 [2024-07-16 01:18:20.822479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.188 qpair failed and we were unable to recover it. 00:25:05.188 [2024-07-16 01:18:20.822588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.188 [2024-07-16 01:18:20.822618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.188 qpair failed and we were unable to recover it. 00:25:05.188 [2024-07-16 01:18:20.822714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.188 [2024-07-16 01:18:20.822740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.188 qpair failed and we were unable to recover it. 00:25:05.188 [2024-07-16 01:18:20.822833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.188 [2024-07-16 01:18:20.822862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.188 qpair failed and we were unable to recover it. 00:25:05.189 [2024-07-16 01:18:20.822983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.189 [2024-07-16 01:18:20.823023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.189 qpair failed and we were unable to recover it. 00:25:05.189 [2024-07-16 01:18:20.823132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.189 [2024-07-16 01:18:20.823173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.189 qpair failed and we were unable to recover it. 00:25:05.189 [2024-07-16 01:18:20.823278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.189 [2024-07-16 01:18:20.823307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.189 qpair failed and we were unable to recover it. 00:25:05.189 [2024-07-16 01:18:20.823401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.189 [2024-07-16 01:18:20.823434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.189 qpair failed and we were unable to recover it. 
00:25:05.189 [2024-07-16 01:18:20.823524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.189 [2024-07-16 01:18:20.823551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.189 qpair failed and we were unable to recover it. 00:25:05.189 [2024-07-16 01:18:20.823659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.189 [2024-07-16 01:18:20.823686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.189 qpair failed and we were unable to recover it. 00:25:05.189 [2024-07-16 01:18:20.823806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.189 [2024-07-16 01:18:20.823832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.189 qpair failed and we were unable to recover it. 00:25:05.189 [2024-07-16 01:18:20.823929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.189 [2024-07-16 01:18:20.823969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.189 qpair failed and we were unable to recover it. 00:25:05.189 [2024-07-16 01:18:20.824069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.189 [2024-07-16 01:18:20.824094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.189 qpair failed and we were unable to recover it. 00:25:05.189 [2024-07-16 01:18:20.824194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.189 [2024-07-16 01:18:20.824233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.189 qpair failed and we were unable to recover it. 00:25:05.189 [2024-07-16 01:18:20.824332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.189 [2024-07-16 01:18:20.824359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.189 qpair failed and we were unable to recover it. 00:25:05.189 [2024-07-16 01:18:20.824464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.189 [2024-07-16 01:18:20.824492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.189 qpair failed and we were unable to recover it. 00:25:05.189 [2024-07-16 01:18:20.824595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.189 [2024-07-16 01:18:20.824622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.189 qpair failed and we were unable to recover it. 00:25:05.189 [2024-07-16 01:18:20.824741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.189 [2024-07-16 01:18:20.824781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.189 qpair failed and we were unable to recover it. 
00:25:05.189 [2024-07-16 01:18:20.824911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.189 [2024-07-16 01:18:20.824941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.189 qpair failed and we were unable to recover it. 00:25:05.189 [2024-07-16 01:18:20.825047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.189 [2024-07-16 01:18:20.825075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.189 qpair failed and we were unable to recover it. 00:25:05.189 [2024-07-16 01:18:20.825179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.189 [2024-07-16 01:18:20.825205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.189 qpair failed and we were unable to recover it. 00:25:05.189 [2024-07-16 01:18:20.825307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.189 [2024-07-16 01:18:20.825344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.189 qpair failed and we were unable to recover it. 00:25:05.189 [2024-07-16 01:18:20.825452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.189 [2024-07-16 01:18:20.825478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.189 qpair failed and we were unable to recover it. 00:25:05.189 [2024-07-16 01:18:20.825597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.189 [2024-07-16 01:18:20.825624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.189 qpair failed and we were unable to recover it. 00:25:05.189 [2024-07-16 01:18:20.825744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.189 [2024-07-16 01:18:20.825784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.189 qpair failed and we were unable to recover it. 00:25:05.189 [2024-07-16 01:18:20.825916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.189 [2024-07-16 01:18:20.825944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.189 qpair failed and we were unable to recover it. 00:25:05.189 [2024-07-16 01:18:20.826051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.189 [2024-07-16 01:18:20.826077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.189 qpair failed and we were unable to recover it. 00:25:05.189 [2024-07-16 01:18:20.826178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.189 [2024-07-16 01:18:20.826203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.189 qpair failed and we were unable to recover it. 
00:25:05.189 [2024-07-16 01:18:20.826341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.189 [2024-07-16 01:18:20.826368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.189 qpair failed and we were unable to recover it. 00:25:05.189 [2024-07-16 01:18:20.826466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.189 [2024-07-16 01:18:20.826493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.189 qpair failed and we were unable to recover it. 00:25:05.189 [2024-07-16 01:18:20.826623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.189 [2024-07-16 01:18:20.826652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.189 qpair failed and we were unable to recover it. 00:25:05.189 [2024-07-16 01:18:20.826752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.189 [2024-07-16 01:18:20.826789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.189 qpair failed and we were unable to recover it. 00:25:05.189 [2024-07-16 01:18:20.826920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.189 [2024-07-16 01:18:20.826947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.189 qpair failed and we were unable to recover it. 00:25:05.189 [2024-07-16 01:18:20.827059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.189 [2024-07-16 01:18:20.827086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.189 qpair failed and we were unable to recover it. 00:25:05.189 [2024-07-16 01:18:20.827212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.189 [2024-07-16 01:18:20.827238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.189 qpair failed and we were unable to recover it. 00:25:05.189 [2024-07-16 01:18:20.827331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.189 [2024-07-16 01:18:20.827365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.189 qpair failed and we were unable to recover it. 00:25:05.189 [2024-07-16 01:18:20.827466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.189 [2024-07-16 01:18:20.827493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.189 qpair failed and we were unable to recover it. 00:25:05.189 [2024-07-16 01:18:20.827589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.189 [2024-07-16 01:18:20.827614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.189 qpair failed and we were unable to recover it. 
00:25:05.189 [2024-07-16 01:18:20.827716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.189 [2024-07-16 01:18:20.827742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.189 qpair failed and we were unable to recover it. 00:25:05.189 [2024-07-16 01:18:20.827835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.189 [2024-07-16 01:18:20.827862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.189 qpair failed and we were unable to recover it. 00:25:05.189 [2024-07-16 01:18:20.827980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.189 [2024-07-16 01:18:20.828022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.189 qpair failed and we were unable to recover it. 00:25:05.189 [2024-07-16 01:18:20.828144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.189 [2024-07-16 01:18:20.828187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.190 qpair failed and we were unable to recover it. 00:25:05.190 [2024-07-16 01:18:20.828332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.190 [2024-07-16 01:18:20.828361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.190 qpair failed and we were unable to recover it. 00:25:05.190 [2024-07-16 01:18:20.828460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.190 [2024-07-16 01:18:20.828494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.190 qpair failed and we were unable to recover it. 00:25:05.190 [2024-07-16 01:18:20.828585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.190 [2024-07-16 01:18:20.828612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.190 qpair failed and we were unable to recover it. 00:25:05.190 [2024-07-16 01:18:20.828705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.190 [2024-07-16 01:18:20.828732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.190 qpair failed and we were unable to recover it. 00:25:05.190 [2024-07-16 01:18:20.828826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.190 [2024-07-16 01:18:20.828852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.190 qpair failed and we were unable to recover it. 00:25:05.190 [2024-07-16 01:18:20.828989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.190 [2024-07-16 01:18:20.829031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.190 qpair failed and we were unable to recover it. 
00:25:05.190 [2024-07-16 01:18:20.829146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.190 [2024-07-16 01:18:20.829176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.190 qpair failed and we were unable to recover it.
00:25:05.195 [... the same two-message failure repeats for every reconnect attempt from 01:18:20.829 through 01:18:20.857: connect() to addr=10.0.0.2, port=4420 fails with errno = 111 for tqpair=0x7f7a58000b90, 0x7f7a60000b90, 0x7f7a68000b90, and 0x14b73f0, and each qpair fails without recovery ...]
00:25:05.195 [2024-07-16 01:18:20.857918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.195 [2024-07-16 01:18:20.857946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.195 qpair failed and we were unable to recover it. 00:25:05.195 [2024-07-16 01:18:20.858056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.195 [2024-07-16 01:18:20.858084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.195 qpair failed and we were unable to recover it. 00:25:05.195 [2024-07-16 01:18:20.858207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.195 [2024-07-16 01:18:20.858234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.195 qpair failed and we were unable to recover it. 00:25:05.195 [2024-07-16 01:18:20.858358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.195 [2024-07-16 01:18:20.858385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.195 qpair failed and we were unable to recover it. 00:25:05.195 [2024-07-16 01:18:20.858514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.195 [2024-07-16 01:18:20.858543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.195 qpair failed and we were unable to recover it. 00:25:05.195 [2024-07-16 01:18:20.858641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.195 [2024-07-16 01:18:20.858670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.195 qpair failed and we were unable to recover it. 00:25:05.195 [2024-07-16 01:18:20.858777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.195 [2024-07-16 01:18:20.858804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.195 qpair failed and we were unable to recover it. 00:25:05.195 [2024-07-16 01:18:20.858907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.195 [2024-07-16 01:18:20.858934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.195 qpair failed and we were unable to recover it. 00:25:05.195 [2024-07-16 01:18:20.859043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.195 [2024-07-16 01:18:20.859072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.195 qpair failed and we were unable to recover it. 00:25:05.195 [2024-07-16 01:18:20.859174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.195 [2024-07-16 01:18:20.859201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.195 qpair failed and we were unable to recover it. 
00:25:05.195 [2024-07-16 01:18:20.859336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.195 [2024-07-16 01:18:20.859362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.195 qpair failed and we were unable to recover it. 00:25:05.195 [2024-07-16 01:18:20.859454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.195 [2024-07-16 01:18:20.859485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.195 qpair failed and we were unable to recover it. 00:25:05.195 [2024-07-16 01:18:20.859575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.195 [2024-07-16 01:18:20.859600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.195 qpair failed and we were unable to recover it. 00:25:05.195 [2024-07-16 01:18:20.859694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.195 [2024-07-16 01:18:20.859720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.195 qpair failed and we were unable to recover it. 00:25:05.195 [2024-07-16 01:18:20.859818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.195 [2024-07-16 01:18:20.859843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.195 qpair failed and we were unable to recover it. 00:25:05.195 [2024-07-16 01:18:20.859967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.195 [2024-07-16 01:18:20.859994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.195 qpair failed and we were unable to recover it. 00:25:05.195 [2024-07-16 01:18:20.860091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.195 [2024-07-16 01:18:20.860117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.195 qpair failed and we were unable to recover it. 00:25:05.195 [2024-07-16 01:18:20.860219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.195 [2024-07-16 01:18:20.860246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.195 qpair failed and we were unable to recover it. 00:25:05.195 [2024-07-16 01:18:20.860340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.195 [2024-07-16 01:18:20.860366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.195 qpair failed and we were unable to recover it. 00:25:05.195 [2024-07-16 01:18:20.860470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.196 [2024-07-16 01:18:20.860499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.196 qpair failed and we were unable to recover it. 
00:25:05.196 [2024-07-16 01:18:20.860598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.196 [2024-07-16 01:18:20.860627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.196 qpair failed and we were unable to recover it. 00:25:05.196 [2024-07-16 01:18:20.860723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.196 [2024-07-16 01:18:20.860749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.196 qpair failed and we were unable to recover it. 00:25:05.196 [2024-07-16 01:18:20.860877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.196 [2024-07-16 01:18:20.860903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.196 qpair failed and we were unable to recover it. 00:25:05.196 [2024-07-16 01:18:20.861027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.196 [2024-07-16 01:18:20.861053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.196 qpair failed and we were unable to recover it. 00:25:05.196 [2024-07-16 01:18:20.861155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.196 [2024-07-16 01:18:20.861181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.196 qpair failed and we were unable to recover it. 00:25:05.196 [2024-07-16 01:18:20.861297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.196 [2024-07-16 01:18:20.861323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.196 qpair failed and we were unable to recover it. 00:25:05.196 [2024-07-16 01:18:20.861426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.196 [2024-07-16 01:18:20.861451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.196 qpair failed and we were unable to recover it. 00:25:05.196 [2024-07-16 01:18:20.861544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.196 [2024-07-16 01:18:20.861571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.196 qpair failed and we were unable to recover it. 00:25:05.196 [2024-07-16 01:18:20.861690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.196 [2024-07-16 01:18:20.861718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.196 qpair failed and we were unable to recover it. 00:25:05.196 [2024-07-16 01:18:20.861839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.196 [2024-07-16 01:18:20.861865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.196 qpair failed and we were unable to recover it. 
00:25:05.196 [2024-07-16 01:18:20.861966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.196 [2024-07-16 01:18:20.861993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.196 qpair failed and we were unable to recover it. 00:25:05.196 [2024-07-16 01:18:20.862090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.196 [2024-07-16 01:18:20.862116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.196 qpair failed and we were unable to recover it. 00:25:05.196 [2024-07-16 01:18:20.862214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.196 [2024-07-16 01:18:20.862240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.196 qpair failed and we were unable to recover it. 00:25:05.196 [2024-07-16 01:18:20.862386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.196 [2024-07-16 01:18:20.862412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.196 qpair failed and we were unable to recover it. 00:25:05.196 [2024-07-16 01:18:20.862534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.196 [2024-07-16 01:18:20.862560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.196 qpair failed and we were unable to recover it. 00:25:05.196 [2024-07-16 01:18:20.862656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.196 [2024-07-16 01:18:20.862682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.196 qpair failed and we were unable to recover it. 00:25:05.196 [2024-07-16 01:18:20.862782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.196 [2024-07-16 01:18:20.862807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.196 qpair failed and we were unable to recover it. 00:25:05.196 [2024-07-16 01:18:20.862900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.196 [2024-07-16 01:18:20.862927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.196 qpair failed and we were unable to recover it. 00:25:05.196 [2024-07-16 01:18:20.863027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.196 [2024-07-16 01:18:20.863058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.196 qpair failed and we were unable to recover it. 00:25:05.196 [2024-07-16 01:18:20.863161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.196 [2024-07-16 01:18:20.863187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.196 qpair failed and we were unable to recover it. 
00:25:05.196 [2024-07-16 01:18:20.863310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.196 [2024-07-16 01:18:20.863335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.196 qpair failed and we were unable to recover it. 00:25:05.196 [2024-07-16 01:18:20.863456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.196 [2024-07-16 01:18:20.863483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.196 qpair failed and we were unable to recover it. 00:25:05.196 [2024-07-16 01:18:20.863578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.196 [2024-07-16 01:18:20.863603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.196 qpair failed and we were unable to recover it. 00:25:05.196 [2024-07-16 01:18:20.863698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.196 [2024-07-16 01:18:20.863727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.196 qpair failed and we were unable to recover it. 00:25:05.196 [2024-07-16 01:18:20.863828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.196 [2024-07-16 01:18:20.863853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.196 qpair failed and we were unable to recover it. 00:25:05.196 [2024-07-16 01:18:20.863942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.196 [2024-07-16 01:18:20.863975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.196 qpair failed and we were unable to recover it. 00:25:05.196 [2024-07-16 01:18:20.864080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.196 [2024-07-16 01:18:20.864106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.196 qpair failed and we were unable to recover it. 00:25:05.196 [2024-07-16 01:18:20.864198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.196 [2024-07-16 01:18:20.864224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.196 qpair failed and we were unable to recover it. 00:25:05.196 [2024-07-16 01:18:20.864315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.196 [2024-07-16 01:18:20.864340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.196 qpair failed and we were unable to recover it. 00:25:05.196 [2024-07-16 01:18:20.864460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.196 [2024-07-16 01:18:20.864486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.196 qpair failed and we were unable to recover it. 
00:25:05.196 [2024-07-16 01:18:20.864615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.196 [2024-07-16 01:18:20.864641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.196 qpair failed and we were unable to recover it. 00:25:05.196 [2024-07-16 01:18:20.864759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.196 [2024-07-16 01:18:20.864784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.196 qpair failed and we were unable to recover it. 00:25:05.196 [2024-07-16 01:18:20.864889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.196 [2024-07-16 01:18:20.864916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.196 qpair failed and we were unable to recover it. 00:25:05.196 [2024-07-16 01:18:20.865018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.196 [2024-07-16 01:18:20.865044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.196 qpair failed and we were unable to recover it. 00:25:05.197 [2024-07-16 01:18:20.865137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.197 [2024-07-16 01:18:20.865163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.197 qpair failed and we were unable to recover it. 00:25:05.197 [2024-07-16 01:18:20.865280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.197 [2024-07-16 01:18:20.865306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.197 qpair failed and we were unable to recover it. 00:25:05.197 [2024-07-16 01:18:20.865397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.197 [2024-07-16 01:18:20.865423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.197 qpair failed and we were unable to recover it. 00:25:05.197 [2024-07-16 01:18:20.865518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.197 [2024-07-16 01:18:20.865543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.197 qpair failed and we were unable to recover it. 00:25:05.197 [2024-07-16 01:18:20.865639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.197 [2024-07-16 01:18:20.865666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.197 qpair failed and we were unable to recover it. 00:25:05.197 [2024-07-16 01:18:20.865761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.197 [2024-07-16 01:18:20.865787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.197 qpair failed and we were unable to recover it. 
00:25:05.197 [2024-07-16 01:18:20.865910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.197 [2024-07-16 01:18:20.865936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.197 qpair failed and we were unable to recover it. 00:25:05.197 [2024-07-16 01:18:20.866043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.197 [2024-07-16 01:18:20.866069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.197 qpair failed and we were unable to recover it. 00:25:05.197 [2024-07-16 01:18:20.866163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.197 [2024-07-16 01:18:20.866190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.197 qpair failed and we were unable to recover it. 00:25:05.197 [2024-07-16 01:18:20.866282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.197 [2024-07-16 01:18:20.866307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.197 qpair failed and we were unable to recover it. 00:25:05.197 [2024-07-16 01:18:20.866405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.197 [2024-07-16 01:18:20.866431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.197 qpair failed and we were unable to recover it. 00:25:05.197 [2024-07-16 01:18:20.866543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.197 [2024-07-16 01:18:20.866583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.197 qpair failed and we were unable to recover it. 00:25:05.197 [2024-07-16 01:18:20.866692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.197 [2024-07-16 01:18:20.866721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.197 qpair failed and we were unable to recover it. 00:25:05.197 [2024-07-16 01:18:20.866873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.197 [2024-07-16 01:18:20.866901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.197 qpair failed and we were unable to recover it. 00:25:05.197 [2024-07-16 01:18:20.867014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.197 [2024-07-16 01:18:20.867041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.197 qpair failed and we were unable to recover it. 00:25:05.197 [2024-07-16 01:18:20.867164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.197 [2024-07-16 01:18:20.867189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.197 qpair failed and we were unable to recover it. 
00:25:05.197 [2024-07-16 01:18:20.867281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.197 [2024-07-16 01:18:20.867307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.197 qpair failed and we were unable to recover it. 00:25:05.197 [2024-07-16 01:18:20.867432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.197 [2024-07-16 01:18:20.867458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.197 qpair failed and we were unable to recover it. 00:25:05.197 [2024-07-16 01:18:20.867553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.197 [2024-07-16 01:18:20.867580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.197 qpair failed and we were unable to recover it. 00:25:05.197 [2024-07-16 01:18:20.867708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.197 [2024-07-16 01:18:20.867734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.197 qpair failed and we were unable to recover it. 00:25:05.197 [2024-07-16 01:18:20.867854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.197 [2024-07-16 01:18:20.867880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.197 qpair failed and we were unable to recover it. 00:25:05.197 [2024-07-16 01:18:20.867972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.197 [2024-07-16 01:18:20.867997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.197 qpair failed and we were unable to recover it. 00:25:05.197 [2024-07-16 01:18:20.868084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.197 [2024-07-16 01:18:20.868110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.197 qpair failed and we were unable to recover it. 00:25:05.197 [2024-07-16 01:18:20.868205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.197 [2024-07-16 01:18:20.868230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.197 qpair failed and we were unable to recover it. 00:25:05.197 [2024-07-16 01:18:20.868325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.197 [2024-07-16 01:18:20.868351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.197 qpair failed and we were unable to recover it. 00:25:05.197 [2024-07-16 01:18:20.868478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.197 [2024-07-16 01:18:20.868503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.197 qpair failed and we were unable to recover it. 
00:25:05.197 [2024-07-16 01:18:20.868603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.197 [2024-07-16 01:18:20.868631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.197 qpair failed and we were unable to recover it. 00:25:05.197 [2024-07-16 01:18:20.868728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.197 [2024-07-16 01:18:20.868755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.197 qpair failed and we were unable to recover it. 00:25:05.197 [2024-07-16 01:18:20.868855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.197 [2024-07-16 01:18:20.868885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.197 qpair failed and we were unable to recover it. 00:25:05.197 [2024-07-16 01:18:20.868987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.197 [2024-07-16 01:18:20.869015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.197 qpair failed and we were unable to recover it. 00:25:05.197 [2024-07-16 01:18:20.869141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.197 [2024-07-16 01:18:20.869168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.197 qpair failed and we were unable to recover it. 00:25:05.197 [2024-07-16 01:18:20.869270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.197 [2024-07-16 01:18:20.869297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.197 qpair failed and we were unable to recover it. 00:25:05.197 [2024-07-16 01:18:20.869434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.197 [2024-07-16 01:18:20.869462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.197 qpair failed and we were unable to recover it. 00:25:05.197 [2024-07-16 01:18:20.869592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.197 [2024-07-16 01:18:20.869618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.197 qpair failed and we were unable to recover it. 00:25:05.197 [2024-07-16 01:18:20.869719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.197 [2024-07-16 01:18:20.869747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.197 qpair failed and we were unable to recover it. 00:25:05.197 [2024-07-16 01:18:20.869837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.197 [2024-07-16 01:18:20.869863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.197 qpair failed and we were unable to recover it. 
00:25:05.197 [2024-07-16 01:18:20.869952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.197 [2024-07-16 01:18:20.869983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.197 qpair failed and we were unable to recover it. 00:25:05.197 [2024-07-16 01:18:20.870079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.198 [2024-07-16 01:18:20.870105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.198 qpair failed and we were unable to recover it. 00:25:05.198 [2024-07-16 01:18:20.870278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.198 [2024-07-16 01:18:20.870304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.198 qpair failed and we were unable to recover it. 00:25:05.198 [2024-07-16 01:18:20.870398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.198 [2024-07-16 01:18:20.870423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.198 qpair failed and we were unable to recover it. 00:25:05.198 [2024-07-16 01:18:20.870545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.198 [2024-07-16 01:18:20.870572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.198 qpair failed and we were unable to recover it. 00:25:05.198 [2024-07-16 01:18:20.870671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.198 [2024-07-16 01:18:20.870697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.198 qpair failed and we were unable to recover it. 00:25:05.198 [2024-07-16 01:18:20.870797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.198 [2024-07-16 01:18:20.870823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.198 qpair failed and we were unable to recover it. 00:25:05.198 [2024-07-16 01:18:20.870919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.198 [2024-07-16 01:18:20.870945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.198 qpair failed and we were unable to recover it. 00:25:05.198 [2024-07-16 01:18:20.871068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.198 [2024-07-16 01:18:20.871095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.198 qpair failed and we were unable to recover it. 00:25:05.198 [2024-07-16 01:18:20.871188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.198 [2024-07-16 01:18:20.871213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.198 qpair failed and we were unable to recover it. 
00:25:05.198 [2024-07-16 01:18:20.871336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.198 [2024-07-16 01:18:20.871364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.198 qpair failed and we were unable to recover it. 00:25:05.198 [2024-07-16 01:18:20.871459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.198 [2024-07-16 01:18:20.871485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.198 qpair failed and we were unable to recover it. 00:25:05.198 [2024-07-16 01:18:20.871581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.198 [2024-07-16 01:18:20.871607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.198 qpair failed and we were unable to recover it. 00:25:05.198 [2024-07-16 01:18:20.871702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.198 [2024-07-16 01:18:20.871728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.198 qpair failed and we were unable to recover it. 00:25:05.198 [2024-07-16 01:18:20.871827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.198 [2024-07-16 01:18:20.871857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.198 qpair failed and we were unable to recover it. 00:25:05.198 [2024-07-16 01:18:20.871960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.198 [2024-07-16 01:18:20.871996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.198 qpair failed and we were unable to recover it. 00:25:05.198 [2024-07-16 01:18:20.872089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.198 [2024-07-16 01:18:20.872116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.198 qpair failed and we were unable to recover it. 00:25:05.198 [2024-07-16 01:18:20.872242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.198 [2024-07-16 01:18:20.872270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.198 qpair failed and we were unable to recover it. 00:25:05.198 [2024-07-16 01:18:20.872370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.198 [2024-07-16 01:18:20.872398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.198 qpair failed and we were unable to recover it. 00:25:05.198 [2024-07-16 01:18:20.872516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.198 [2024-07-16 01:18:20.872545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.198 qpair failed and we were unable to recover it. 
00:25:05.198 [2024-07-16 01:18:20.872645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.198 [2024-07-16 01:18:20.872673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.198 qpair failed and we were unable to recover it. 00:25:05.198 [2024-07-16 01:18:20.872797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.198 [2024-07-16 01:18:20.872823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.198 qpair failed and we were unable to recover it. 00:25:05.198 [2024-07-16 01:18:20.872925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.198 [2024-07-16 01:18:20.872950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.198 qpair failed and we were unable to recover it. 00:25:05.198 [2024-07-16 01:18:20.873058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.198 [2024-07-16 01:18:20.873083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.198 qpair failed and we were unable to recover it. 00:25:05.198 [2024-07-16 01:18:20.873177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.198 [2024-07-16 01:18:20.873203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.198 qpair failed and we were unable to recover it. 00:25:05.198 [2024-07-16 01:18:20.873297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.198 [2024-07-16 01:18:20.873323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.198 qpair failed and we were unable to recover it. 00:25:05.198 [2024-07-16 01:18:20.873448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.198 [2024-07-16 01:18:20.873474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.198 qpair failed and we were unable to recover it. 00:25:05.198 [2024-07-16 01:18:20.873566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.198 [2024-07-16 01:18:20.873591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.198 qpair failed and we were unable to recover it. 00:25:05.198 [2024-07-16 01:18:20.873688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.198 [2024-07-16 01:18:20.873714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.198 qpair failed and we were unable to recover it. 00:25:05.198 [2024-07-16 01:18:20.873817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.198 [2024-07-16 01:18:20.873842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.198 qpair failed and we were unable to recover it. 
00:25:05.198 [2024-07-16 01:18:20.873946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.198 [2024-07-16 01:18:20.873983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.198 qpair failed and we were unable to recover it. 00:25:05.198 [2024-07-16 01:18:20.874108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.198 [2024-07-16 01:18:20.874134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.198 qpair failed and we were unable to recover it. 00:25:05.198 [2024-07-16 01:18:20.874252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.198 [2024-07-16 01:18:20.874277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.198 qpair failed and we were unable to recover it. 00:25:05.198 [2024-07-16 01:18:20.874397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.198 [2024-07-16 01:18:20.874423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.198 qpair failed and we were unable to recover it. 00:25:05.198 [2024-07-16 01:18:20.874522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.198 [2024-07-16 01:18:20.874547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.198 qpair failed and we were unable to recover it. 00:25:05.198 [2024-07-16 01:18:20.874672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.198 [2024-07-16 01:18:20.874699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.198 qpair failed and we were unable to recover it. 00:25:05.198 [2024-07-16 01:18:20.874793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.198 [2024-07-16 01:18:20.874820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.198 qpair failed and we were unable to recover it. 00:25:05.198 [2024-07-16 01:18:20.874941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.198 [2024-07-16 01:18:20.874974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.198 qpair failed and we were unable to recover it. 00:25:05.198 [2024-07-16 01:18:20.875097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.198 [2024-07-16 01:18:20.875122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.198 qpair failed and we were unable to recover it. 00:25:05.198 [2024-07-16 01:18:20.875211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.198 [2024-07-16 01:18:20.875237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.198 qpair failed and we were unable to recover it. 
00:25:05.198 [2024-07-16 01:18:20.875360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.198 [2024-07-16 01:18:20.875385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.198 qpair failed and we were unable to recover it.
00:25:05.199 [2024-07-16 01:18:20.875892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.199 [2024-07-16 01:18:20.875932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.199 qpair failed and we were unable to recover it.
00:25:05.199 [2024-07-16 01:18:20.879739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.199 [2024-07-16 01:18:20.879779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.199 qpair failed and we were unable to recover it.
00:25:05.199 [... the same three-line failure repeats continuously from 01:18:20.875 to 01:18:20.903, cycling among tqpair=0x14b73f0, 0x7f7a68000b90, and 0x7f7a58000b90; every attempt targets addr=10.0.0.2, port=4420 and every one ends with "qpair failed and we were unable to recover it." ...]
00:25:05.204 [2024-07-16 01:18:20.903934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.204 [2024-07-16 01:18:20.903974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.204 qpair failed and we were unable to recover it. 00:25:05.204 [2024-07-16 01:18:20.904075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.204 [2024-07-16 01:18:20.904101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.204 qpair failed and we were unable to recover it. 00:25:05.204 [2024-07-16 01:18:20.904198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.204 [2024-07-16 01:18:20.904226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.204 qpair failed and we were unable to recover it. 00:25:05.204 [2024-07-16 01:18:20.904328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.204 [2024-07-16 01:18:20.904355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.204 qpair failed and we were unable to recover it. 00:25:05.204 [2024-07-16 01:18:20.904482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.204 [2024-07-16 01:18:20.904510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.204 qpair failed and we were unable to recover it. 00:25:05.204 [2024-07-16 01:18:20.904605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.204 [2024-07-16 01:18:20.904632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.204 qpair failed and we were unable to recover it. 00:25:05.204 [2024-07-16 01:18:20.904744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.204 [2024-07-16 01:18:20.904783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.204 qpair failed and we were unable to recover it. 00:25:05.204 [2024-07-16 01:18:20.904896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.204 [2024-07-16 01:18:20.904924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.204 qpair failed and we were unable to recover it. 00:25:05.204 [2024-07-16 01:18:20.905026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.204 [2024-07-16 01:18:20.905054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.204 qpair failed and we were unable to recover it. 00:25:05.204 [2024-07-16 01:18:20.905173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.204 [2024-07-16 01:18:20.905199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.204 qpair failed and we were unable to recover it. 
00:25:05.204 [2024-07-16 01:18:20.905295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.204 [2024-07-16 01:18:20.905321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.204 qpair failed and we were unable to recover it. 00:25:05.204 [2024-07-16 01:18:20.905417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.204 [2024-07-16 01:18:20.905443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.204 qpair failed and we were unable to recover it. 00:25:05.204 [2024-07-16 01:18:20.905548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.204 [2024-07-16 01:18:20.905577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.204 qpair failed and we were unable to recover it. 00:25:05.204 [2024-07-16 01:18:20.905681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.204 [2024-07-16 01:18:20.905710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.204 qpair failed and we were unable to recover it. 00:25:05.204 [2024-07-16 01:18:20.905808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.204 [2024-07-16 01:18:20.905836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.204 qpair failed and we were unable to recover it. 00:25:05.204 [2024-07-16 01:18:20.905934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.204 [2024-07-16 01:18:20.905970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.204 qpair failed and we were unable to recover it. 00:25:05.204 [2024-07-16 01:18:20.906064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.204 [2024-07-16 01:18:20.906091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.204 qpair failed and we were unable to recover it. 00:25:05.204 [2024-07-16 01:18:20.906184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.204 [2024-07-16 01:18:20.906211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.204 qpair failed and we were unable to recover it. 00:25:05.204 [2024-07-16 01:18:20.906310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.204 [2024-07-16 01:18:20.906339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.204 qpair failed and we were unable to recover it. 00:25:05.204 [2024-07-16 01:18:20.906462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.204 [2024-07-16 01:18:20.906490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.204 qpair failed and we were unable to recover it. 
00:25:05.204 [2024-07-16 01:18:20.906605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.204 [2024-07-16 01:18:20.906646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.204 qpair failed and we were unable to recover it. 00:25:05.204 [2024-07-16 01:18:20.906783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.204 [2024-07-16 01:18:20.906811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.204 qpair failed and we were unable to recover it. 00:25:05.204 [2024-07-16 01:18:20.906904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.204 [2024-07-16 01:18:20.906932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.204 qpair failed and we were unable to recover it. 00:25:05.204 [2024-07-16 01:18:20.907044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.204 [2024-07-16 01:18:20.907071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.204 qpair failed and we were unable to recover it. 00:25:05.204 [2024-07-16 01:18:20.907171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.204 [2024-07-16 01:18:20.907197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.204 qpair failed and we were unable to recover it. 00:25:05.204 [2024-07-16 01:18:20.907295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.204 [2024-07-16 01:18:20.907322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.204 qpair failed and we were unable to recover it. 00:25:05.204 [2024-07-16 01:18:20.907440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.204 [2024-07-16 01:18:20.907466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.204 qpair failed and we were unable to recover it. 00:25:05.204 [2024-07-16 01:18:20.907562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.204 [2024-07-16 01:18:20.907590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.204 qpair failed and we were unable to recover it. 00:25:05.204 [2024-07-16 01:18:20.907716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.204 [2024-07-16 01:18:20.907744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.204 qpair failed and we were unable to recover it. 00:25:05.204 [2024-07-16 01:18:20.907874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.204 [2024-07-16 01:18:20.907902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.204 qpair failed and we were unable to recover it. 
00:25:05.204 [2024-07-16 01:18:20.908009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.204 [2024-07-16 01:18:20.908037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.204 qpair failed and we were unable to recover it. 00:25:05.204 [2024-07-16 01:18:20.908125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.204 [2024-07-16 01:18:20.908152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.204 qpair failed and we were unable to recover it. 00:25:05.204 [2024-07-16 01:18:20.908260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.205 [2024-07-16 01:18:20.908290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.205 qpair failed and we were unable to recover it. 00:25:05.205 [2024-07-16 01:18:20.908414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.205 [2024-07-16 01:18:20.908449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.205 qpair failed and we were unable to recover it. 00:25:05.205 [2024-07-16 01:18:20.908541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.205 [2024-07-16 01:18:20.908567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.205 qpair failed and we were unable to recover it. 00:25:05.205 [2024-07-16 01:18:20.908687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.205 [2024-07-16 01:18:20.908712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.205 qpair failed and we were unable to recover it. 00:25:05.205 [2024-07-16 01:18:20.908814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.205 [2024-07-16 01:18:20.908840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.205 qpair failed and we were unable to recover it. 00:25:05.205 [2024-07-16 01:18:20.908952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.205 [2024-07-16 01:18:20.909010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.205 qpair failed and we were unable to recover it. 00:25:05.205 [2024-07-16 01:18:20.909141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.205 [2024-07-16 01:18:20.909170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.205 qpair failed and we were unable to recover it. 00:25:05.205 [2024-07-16 01:18:20.909321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.205 [2024-07-16 01:18:20.909347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.205 qpair failed and we were unable to recover it. 
00:25:05.205 [2024-07-16 01:18:20.909443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.205 [2024-07-16 01:18:20.909471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.205 qpair failed and we were unable to recover it. 00:25:05.205 [2024-07-16 01:18:20.909592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.205 [2024-07-16 01:18:20.909620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.205 qpair failed and we were unable to recover it. 00:25:05.205 [2024-07-16 01:18:20.909779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.205 [2024-07-16 01:18:20.909808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.205 qpair failed and we were unable to recover it. 00:25:05.205 [2024-07-16 01:18:20.909903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.205 [2024-07-16 01:18:20.909933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.205 qpair failed and we were unable to recover it. 00:25:05.205 [2024-07-16 01:18:20.910044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.205 [2024-07-16 01:18:20.910072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.205 qpair failed and we were unable to recover it. 00:25:05.205 [2024-07-16 01:18:20.910171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.205 [2024-07-16 01:18:20.910197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.205 qpair failed and we were unable to recover it. 00:25:05.205 [2024-07-16 01:18:20.910290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.205 [2024-07-16 01:18:20.910328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.205 qpair failed and we were unable to recover it. 00:25:05.205 [2024-07-16 01:18:20.910429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.205 [2024-07-16 01:18:20.910455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.205 qpair failed and we were unable to recover it. 00:25:05.205 [2024-07-16 01:18:20.910548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.205 [2024-07-16 01:18:20.910575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.205 qpair failed and we were unable to recover it. 00:25:05.205 [2024-07-16 01:18:20.910669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.205 [2024-07-16 01:18:20.910695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.205 qpair failed and we were unable to recover it. 
00:25:05.205 [2024-07-16 01:18:20.910819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.205 [2024-07-16 01:18:20.910846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.205 qpair failed and we were unable to recover it. 00:25:05.205 [2024-07-16 01:18:20.910934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.205 [2024-07-16 01:18:20.910964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.205 qpair failed and we were unable to recover it. 00:25:05.205 [2024-07-16 01:18:20.911076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.205 [2024-07-16 01:18:20.911103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.205 qpair failed and we were unable to recover it. 00:25:05.205 [2024-07-16 01:18:20.911197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.205 [2024-07-16 01:18:20.911222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.205 qpair failed and we were unable to recover it. 00:25:05.205 [2024-07-16 01:18:20.911344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.205 [2024-07-16 01:18:20.911370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.205 qpair failed and we were unable to recover it. 00:25:05.205 [2024-07-16 01:18:20.911471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.205 [2024-07-16 01:18:20.911498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.205 qpair failed and we were unable to recover it. 00:25:05.205 [2024-07-16 01:18:20.911587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.205 [2024-07-16 01:18:20.911613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.205 qpair failed and we were unable to recover it. 00:25:05.205 [2024-07-16 01:18:20.911702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.205 [2024-07-16 01:18:20.911728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.205 qpair failed and we were unable to recover it. 00:25:05.205 [2024-07-16 01:18:20.911834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.205 [2024-07-16 01:18:20.911874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.205 qpair failed and we were unable to recover it. 00:25:05.205 [2024-07-16 01:18:20.911990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.205 [2024-07-16 01:18:20.912019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.205 qpair failed and we were unable to recover it. 
00:25:05.205 [2024-07-16 01:18:20.912120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.205 [2024-07-16 01:18:20.912153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.205 qpair failed and we were unable to recover it. 00:25:05.205 [2024-07-16 01:18:20.912254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.205 [2024-07-16 01:18:20.912282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.205 qpair failed and we were unable to recover it. 00:25:05.205 [2024-07-16 01:18:20.912381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.205 [2024-07-16 01:18:20.912410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.205 qpair failed and we were unable to recover it. 00:25:05.205 [2024-07-16 01:18:20.912537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.205 [2024-07-16 01:18:20.912564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.205 qpair failed and we were unable to recover it. 00:25:05.205 [2024-07-16 01:18:20.912659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.205 [2024-07-16 01:18:20.912687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.205 qpair failed and we were unable to recover it. 00:25:05.206 [2024-07-16 01:18:20.912785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.206 [2024-07-16 01:18:20.912811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.206 qpair failed and we were unable to recover it. 00:25:05.206 [2024-07-16 01:18:20.912921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.206 [2024-07-16 01:18:20.912971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.206 qpair failed and we were unable to recover it. 00:25:05.206 [2024-07-16 01:18:20.913093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.206 [2024-07-16 01:18:20.913122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.206 qpair failed and we were unable to recover it. 00:25:05.206 [2024-07-16 01:18:20.913245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.206 [2024-07-16 01:18:20.913273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.206 qpair failed and we were unable to recover it. 00:25:05.206 [2024-07-16 01:18:20.913368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.206 [2024-07-16 01:18:20.913395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.206 qpair failed and we were unable to recover it. 
00:25:05.206 [2024-07-16 01:18:20.913520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.206 [2024-07-16 01:18:20.913547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.206 qpair failed and we were unable to recover it. 00:25:05.206 [2024-07-16 01:18:20.913639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.206 [2024-07-16 01:18:20.913666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.206 qpair failed and we were unable to recover it. 00:25:05.206 [2024-07-16 01:18:20.913777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.206 [2024-07-16 01:18:20.913818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.206 qpair failed and we were unable to recover it. 00:25:05.206 [2024-07-16 01:18:20.913949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.206 [2024-07-16 01:18:20.913984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.206 qpair failed and we were unable to recover it. 00:25:05.206 [2024-07-16 01:18:20.914122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.206 [2024-07-16 01:18:20.914153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.206 qpair failed and we were unable to recover it. 00:25:05.206 [2024-07-16 01:18:20.914258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.206 [2024-07-16 01:18:20.914284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.206 qpair failed and we were unable to recover it. 00:25:05.206 [2024-07-16 01:18:20.914381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.206 [2024-07-16 01:18:20.914407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.206 qpair failed and we were unable to recover it. 00:25:05.206 [2024-07-16 01:18:20.914501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.206 [2024-07-16 01:18:20.914527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.206 qpair failed and we were unable to recover it. 00:25:05.206 [2024-07-16 01:18:20.914625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.206 [2024-07-16 01:18:20.914653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.206 qpair failed and we were unable to recover it. 00:25:05.206 [2024-07-16 01:18:20.914778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.206 [2024-07-16 01:18:20.914805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.206 qpair failed and we were unable to recover it. 
00:25:05.206 [2024-07-16 01:18:20.914905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.206 [2024-07-16 01:18:20.914933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.206 qpair failed and we were unable to recover it. 00:25:05.206 [2024-07-16 01:18:20.915047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.206 [2024-07-16 01:18:20.915076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.206 qpair failed and we were unable to recover it. 00:25:05.206 [2024-07-16 01:18:20.915170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.206 [2024-07-16 01:18:20.915196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.206 qpair failed and we were unable to recover it. 00:25:05.206 [2024-07-16 01:18:20.915303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.206 [2024-07-16 01:18:20.915329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.206 qpair failed and we were unable to recover it. 00:25:05.206 [2024-07-16 01:18:20.915454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.206 [2024-07-16 01:18:20.915480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.206 qpair failed and we were unable to recover it. 00:25:05.206 [2024-07-16 01:18:20.915605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.206 [2024-07-16 01:18:20.915631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.206 qpair failed and we were unable to recover it. 00:25:05.206 [2024-07-16 01:18:20.915728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.206 [2024-07-16 01:18:20.915754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.206 qpair failed and we were unable to recover it. 00:25:05.206 [2024-07-16 01:18:20.915879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.206 [2024-07-16 01:18:20.915908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.206 qpair failed and we were unable to recover it. 00:25:05.206 [2024-07-16 01:18:20.916015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.206 [2024-07-16 01:18:20.916047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.206 qpair failed and we were unable to recover it. 00:25:05.206 [2024-07-16 01:18:20.916156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.206 [2024-07-16 01:18:20.916185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.206 qpair failed and we were unable to recover it. 
00:25:05.206 [2024-07-16 01:18:20.916293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.206 [2024-07-16 01:18:20.916322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.206 qpair failed and we were unable to recover it. 00:25:05.206 [2024-07-16 01:18:20.916446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.206 [2024-07-16 01:18:20.916474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.206 qpair failed and we were unable to recover it. 00:25:05.206 [2024-07-16 01:18:20.916609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.206 [2024-07-16 01:18:20.916637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.206 qpair failed and we were unable to recover it. 00:25:05.206 [2024-07-16 01:18:20.916744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.206 [2024-07-16 01:18:20.916772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.206 qpair failed and we were unable to recover it. 00:25:05.206 [2024-07-16 01:18:20.916865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.206 [2024-07-16 01:18:20.916892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.206 qpair failed and we were unable to recover it. 00:25:05.206 [2024-07-16 01:18:20.916998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.206 [2024-07-16 01:18:20.917028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.206 qpair failed and we were unable to recover it. 00:25:05.206 [2024-07-16 01:18:20.917133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.206 [2024-07-16 01:18:20.917158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.206 qpair failed and we were unable to recover it. 00:25:05.206 [2024-07-16 01:18:20.917274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.206 [2024-07-16 01:18:20.917315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.206 qpair failed and we were unable to recover it. 00:25:05.206 [2024-07-16 01:18:20.917416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.206 [2024-07-16 01:18:20.917444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.206 qpair failed and we were unable to recover it. 00:25:05.206 [2024-07-16 01:18:20.917572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.206 [2024-07-16 01:18:20.917602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.206 qpair failed and we were unable to recover it. 
00:25:05.206 [2024-07-16 01:18:20.917696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.206 [2024-07-16 01:18:20.917727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.206 qpair failed and we were unable to recover it. 00:25:05.206 [2024-07-16 01:18:20.917823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.206 [2024-07-16 01:18:20.917849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.206 qpair failed and we were unable to recover it. 00:25:05.206 [2024-07-16 01:18:20.917968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.206 [2024-07-16 01:18:20.918003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.206 qpair failed and we were unable to recover it. 00:25:05.206 [2024-07-16 01:18:20.918097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.206 [2024-07-16 01:18:20.918126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.206 qpair failed and we were unable to recover it. 00:25:05.206 [2024-07-16 01:18:20.918220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.206 [2024-07-16 01:18:20.918258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.206 qpair failed and we were unable to recover it. 00:25:05.207 [2024-07-16 01:18:20.918363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.207 [2024-07-16 01:18:20.918392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.207 qpair failed and we were unable to recover it. 00:25:05.207 [2024-07-16 01:18:20.918501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.207 [2024-07-16 01:18:20.918530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.207 qpair failed and we were unable to recover it. 00:25:05.207 [2024-07-16 01:18:20.918631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.207 [2024-07-16 01:18:20.918662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.207 qpair failed and we were unable to recover it. 00:25:05.207 [2024-07-16 01:18:20.918791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.207 [2024-07-16 01:18:20.918818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.207 qpair failed and we were unable to recover it. 00:25:05.207 [2024-07-16 01:18:20.918940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.207 [2024-07-16 01:18:20.918973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.207 qpair failed and we were unable to recover it. 
00:25:05.207 [2024-07-16 01:18:20.919068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.207 [2024-07-16 01:18:20.919095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.207 qpair failed and we were unable to recover it. 00:25:05.207 [2024-07-16 01:18:20.919195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.207 [2024-07-16 01:18:20.919224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.207 qpair failed and we were unable to recover it. 00:25:05.207 [2024-07-16 01:18:20.919325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.207 [2024-07-16 01:18:20.919353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.207 qpair failed and we were unable to recover it. 00:25:05.207 [2024-07-16 01:18:20.919455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.207 [2024-07-16 01:18:20.919482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.207 qpair failed and we were unable to recover it. 00:25:05.207 [2024-07-16 01:18:20.919594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.207 [2024-07-16 01:18:20.919622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.207 qpair failed and we were unable to recover it. 00:25:05.207 [2024-07-16 01:18:20.919753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.207 [2024-07-16 01:18:20.919782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.207 qpair failed and we were unable to recover it. 00:25:05.207 [2024-07-16 01:18:20.919880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.207 [2024-07-16 01:18:20.919907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.207 qpair failed and we were unable to recover it. 00:25:05.207 [2024-07-16 01:18:20.920004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.207 [2024-07-16 01:18:20.920032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.207 qpair failed and we were unable to recover it. 00:25:05.207 [2024-07-16 01:18:20.920127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.207 [2024-07-16 01:18:20.920154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.207 qpair failed and we were unable to recover it. 00:25:05.207 [2024-07-16 01:18:20.920255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.207 [2024-07-16 01:18:20.920283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.207 qpair failed and we were unable to recover it. 
00:25:05.207 [2024-07-16 01:18:20.920385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.207 [2024-07-16 01:18:20.920413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.207 qpair failed and we were unable to recover it. 00:25:05.207 [2024-07-16 01:18:20.920538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.207 [2024-07-16 01:18:20.920566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.207 qpair failed and we were unable to recover it. 00:25:05.207 [2024-07-16 01:18:20.920664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.207 [2024-07-16 01:18:20.920692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.207 qpair failed and we were unable to recover it. 00:25:05.207 [2024-07-16 01:18:20.920788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.207 [2024-07-16 01:18:20.920816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.207 qpair failed and we were unable to recover it. 00:25:05.207 [2024-07-16 01:18:20.920913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.207 [2024-07-16 01:18:20.920940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.207 qpair failed and we were unable to recover it. 00:25:05.207 [2024-07-16 01:18:20.921079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.207 [2024-07-16 01:18:20.921107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.207 qpair failed and we were unable to recover it. 00:25:05.207 [2024-07-16 01:18:20.921227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.207 [2024-07-16 01:18:20.921265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.207 qpair failed and we were unable to recover it. 00:25:05.207 [2024-07-16 01:18:20.921371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.207 [2024-07-16 01:18:20.921405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.207 qpair failed and we were unable to recover it. 00:25:05.207 [2024-07-16 01:18:20.921506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.207 [2024-07-16 01:18:20.921533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.207 qpair failed and we were unable to recover it. 00:25:05.207 [2024-07-16 01:18:20.921652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.207 [2024-07-16 01:18:20.921679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.207 qpair failed and we were unable to recover it. 
00:25:05.207 [2024-07-16 01:18:20.921778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.207 [2024-07-16 01:18:20.921804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.207 qpair failed and we were unable to recover it.
00:25:05.207 [... the same three-line error sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it") repeats roughly 200 more times between 01:18:20.921 and 01:18:20.950, cycling through tqpair values 0x14b73f0, 0x7f7a58000b90, 0x7f7a60000b90, and 0x7f7a68000b90, always against addr=10.0.0.2, port=4420 ...]
00:25:05.213 [2024-07-16 01:18:20.950254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.213 [2024-07-16 01:18:20.950281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.213 qpair failed and we were unable to recover it. 00:25:05.213 [2024-07-16 01:18:20.950407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.213 [2024-07-16 01:18:20.950434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.213 qpair failed and we were unable to recover it. 00:25:05.213 [2024-07-16 01:18:20.950529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.214 [2024-07-16 01:18:20.950556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.214 qpair failed and we were unable to recover it. 00:25:05.214 [2024-07-16 01:18:20.950681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.214 [2024-07-16 01:18:20.950708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.214 qpair failed and we were unable to recover it. 00:25:05.214 [2024-07-16 01:18:20.950802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.214 [2024-07-16 01:18:20.950830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.214 qpair failed and we were unable to recover it. 00:25:05.214 [2024-07-16 01:18:20.950953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.214 [2024-07-16 01:18:20.950989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.214 qpair failed and we were unable to recover it. 00:25:05.214 [2024-07-16 01:18:20.951110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.214 [2024-07-16 01:18:20.951136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.214 qpair failed and we were unable to recover it. 00:25:05.214 [2024-07-16 01:18:20.951232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.214 [2024-07-16 01:18:20.951259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.214 qpair failed and we were unable to recover it. 00:25:05.214 [2024-07-16 01:18:20.951350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.214 [2024-07-16 01:18:20.951376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.214 qpair failed and we were unable to recover it. 00:25:05.214 [2024-07-16 01:18:20.951477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.214 [2024-07-16 01:18:20.951503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.214 qpair failed and we were unable to recover it. 
00:25:05.214 [2024-07-16 01:18:20.951592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.214 [2024-07-16 01:18:20.951618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.214 qpair failed and we were unable to recover it. 00:25:05.214 [2024-07-16 01:18:20.951740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.214 [2024-07-16 01:18:20.951768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.214 qpair failed and we were unable to recover it. 00:25:05.214 [2024-07-16 01:18:20.951894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.214 [2024-07-16 01:18:20.951920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.214 qpair failed and we were unable to recover it. 00:25:05.214 [2024-07-16 01:18:20.952037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.214 [2024-07-16 01:18:20.952067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.214 qpair failed and we were unable to recover it. 00:25:05.214 [2024-07-16 01:18:20.952166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.214 [2024-07-16 01:18:20.952193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.214 qpair failed and we were unable to recover it. 00:25:05.214 [2024-07-16 01:18:20.952324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.214 [2024-07-16 01:18:20.952356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.214 qpair failed and we were unable to recover it. 00:25:05.214 [2024-07-16 01:18:20.952480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.214 [2024-07-16 01:18:20.952507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.214 qpair failed and we were unable to recover it. 00:25:05.214 [2024-07-16 01:18:20.952603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.214 [2024-07-16 01:18:20.952633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.214 qpair failed and we were unable to recover it. 00:25:05.214 [2024-07-16 01:18:20.952766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.214 [2024-07-16 01:18:20.952807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.214 qpair failed and we were unable to recover it. 00:25:05.214 [2024-07-16 01:18:20.952950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.214 [2024-07-16 01:18:20.952985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.214 qpair failed and we were unable to recover it. 
00:25:05.214 [2024-07-16 01:18:20.953090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.214 [2024-07-16 01:18:20.953116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.214 qpair failed and we were unable to recover it. 00:25:05.214 [2024-07-16 01:18:20.953208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.214 [2024-07-16 01:18:20.953234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.214 qpair failed and we were unable to recover it. 00:25:05.214 [2024-07-16 01:18:20.953340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.214 [2024-07-16 01:18:20.953367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.214 qpair failed and we were unable to recover it. 00:25:05.214 [2024-07-16 01:18:20.953484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.214 [2024-07-16 01:18:20.953511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.214 qpair failed and we were unable to recover it. 00:25:05.214 [2024-07-16 01:18:20.953612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.214 [2024-07-16 01:18:20.953642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.214 qpair failed and we were unable to recover it. 00:25:05.214 [2024-07-16 01:18:20.953745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.214 [2024-07-16 01:18:20.953772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.214 qpair failed and we were unable to recover it. 00:25:05.214 [2024-07-16 01:18:20.953896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.214 [2024-07-16 01:18:20.953923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.214 qpair failed and we were unable to recover it. 00:25:05.214 [2024-07-16 01:18:20.954024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.214 [2024-07-16 01:18:20.954052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.214 qpair failed and we were unable to recover it. 00:25:05.214 [2024-07-16 01:18:20.954158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.214 [2024-07-16 01:18:20.954186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.214 qpair failed and we were unable to recover it. 00:25:05.214 [2024-07-16 01:18:20.954290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.214 [2024-07-16 01:18:20.954317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.214 qpair failed and we were unable to recover it. 
00:25:05.214 [2024-07-16 01:18:20.954416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.214 [2024-07-16 01:18:20.954445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.214 qpair failed and we were unable to recover it. 00:25:05.214 [2024-07-16 01:18:20.954536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.214 [2024-07-16 01:18:20.954563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.215 qpair failed and we were unable to recover it. 00:25:05.215 [2024-07-16 01:18:20.954662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.215 [2024-07-16 01:18:20.954690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.215 qpair failed and we were unable to recover it. 00:25:05.215 [2024-07-16 01:18:20.954788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.215 [2024-07-16 01:18:20.954816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.215 qpair failed and we were unable to recover it. 00:25:05.215 [2024-07-16 01:18:20.954914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.215 [2024-07-16 01:18:20.954940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.215 qpair failed and we were unable to recover it. 00:25:05.215 [2024-07-16 01:18:20.955062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.215 [2024-07-16 01:18:20.955089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.215 qpair failed and we were unable to recover it. 00:25:05.215 [2024-07-16 01:18:20.955188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.215 [2024-07-16 01:18:20.955216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.215 qpair failed and we were unable to recover it. 00:25:05.215 [2024-07-16 01:18:20.955305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.215 [2024-07-16 01:18:20.955332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.215 qpair failed and we were unable to recover it. 00:25:05.215 [2024-07-16 01:18:20.955470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.215 [2024-07-16 01:18:20.955510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.215 qpair failed and we were unable to recover it. 00:25:05.215 [2024-07-16 01:18:20.955620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.215 [2024-07-16 01:18:20.955650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.215 qpair failed and we were unable to recover it. 
00:25:05.215 [2024-07-16 01:18:20.955790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.215 [2024-07-16 01:18:20.955830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.215 qpair failed and we were unable to recover it. 00:25:05.215 [2024-07-16 01:18:20.955971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.215 [2024-07-16 01:18:20.956001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.215 qpair failed and we were unable to recover it. 00:25:05.215 [2024-07-16 01:18:20.956111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.215 [2024-07-16 01:18:20.956141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.215 qpair failed and we were unable to recover it. 00:25:05.215 [2024-07-16 01:18:20.956241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.215 [2024-07-16 01:18:20.956268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.215 qpair failed and we were unable to recover it. 00:25:05.215 [2024-07-16 01:18:20.956395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.215 [2024-07-16 01:18:20.956424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.215 qpair failed and we were unable to recover it. 00:25:05.215 [2024-07-16 01:18:20.956525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.215 [2024-07-16 01:18:20.956552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.215 qpair failed and we were unable to recover it. 00:25:05.215 [2024-07-16 01:18:20.956650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.215 [2024-07-16 01:18:20.956680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.215 qpair failed and we were unable to recover it. 00:25:05.215 [2024-07-16 01:18:20.956810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.215 [2024-07-16 01:18:20.956837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.215 qpair failed and we were unable to recover it. 00:25:05.215 [2024-07-16 01:18:20.956966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.215 [2024-07-16 01:18:20.956994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.215 qpair failed and we were unable to recover it. 00:25:05.215 [2024-07-16 01:18:20.957094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.215 [2024-07-16 01:18:20.957121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.215 qpair failed and we were unable to recover it. 
00:25:05.215 [2024-07-16 01:18:20.957223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.215 [2024-07-16 01:18:20.957250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.215 qpair failed and we were unable to recover it. 00:25:05.215 [2024-07-16 01:18:20.957345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.215 [2024-07-16 01:18:20.957372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.215 qpair failed and we were unable to recover it. 00:25:05.215 [2024-07-16 01:18:20.957470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.215 [2024-07-16 01:18:20.957498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.215 qpair failed and we were unable to recover it. 00:25:05.215 [2024-07-16 01:18:20.957597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.215 [2024-07-16 01:18:20.957625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.215 qpair failed and we were unable to recover it. 00:25:05.215 [2024-07-16 01:18:20.957728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.215 [2024-07-16 01:18:20.957768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.216 qpair failed and we were unable to recover it. 00:25:05.216 [2024-07-16 01:18:20.957875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.216 [2024-07-16 01:18:20.957910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.216 qpair failed and we were unable to recover it. 00:25:05.216 [2024-07-16 01:18:20.958016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.216 [2024-07-16 01:18:20.958044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.216 qpair failed and we were unable to recover it. 00:25:05.216 [2024-07-16 01:18:20.958165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.216 [2024-07-16 01:18:20.958191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.216 qpair failed and we were unable to recover it. 00:25:05.216 [2024-07-16 01:18:20.958287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.216 [2024-07-16 01:18:20.958314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.216 qpair failed and we were unable to recover it. 00:25:05.216 [2024-07-16 01:18:20.958408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.216 [2024-07-16 01:18:20.958435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.216 qpair failed and we were unable to recover it. 
00:25:05.216 [2024-07-16 01:18:20.958549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.216 [2024-07-16 01:18:20.958577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.216 qpair failed and we were unable to recover it. 00:25:05.216 [2024-07-16 01:18:20.958679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.216 [2024-07-16 01:18:20.958706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.216 qpair failed and we were unable to recover it. 00:25:05.216 [2024-07-16 01:18:20.958812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.216 [2024-07-16 01:18:20.958852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.216 qpair failed and we were unable to recover it. 00:25:05.216 [2024-07-16 01:18:20.958964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.216 [2024-07-16 01:18:20.958992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.216 qpair failed and we were unable to recover it. 00:25:05.216 [2024-07-16 01:18:20.959121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.216 [2024-07-16 01:18:20.959148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.216 qpair failed and we were unable to recover it. 00:25:05.216 [2024-07-16 01:18:20.959243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.216 [2024-07-16 01:18:20.959270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.216 qpair failed and we were unable to recover it. 00:25:05.216 [2024-07-16 01:18:20.959362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.216 [2024-07-16 01:18:20.959389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.216 qpair failed and we were unable to recover it. 00:25:05.216 [2024-07-16 01:18:20.959511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.216 [2024-07-16 01:18:20.959538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.216 qpair failed and we were unable to recover it. 00:25:05.216 [2024-07-16 01:18:20.959640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.216 [2024-07-16 01:18:20.959666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.216 qpair failed and we were unable to recover it. 00:25:05.216 [2024-07-16 01:18:20.959774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.216 [2024-07-16 01:18:20.959800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.216 qpair failed and we were unable to recover it. 
00:25:05.216 [2024-07-16 01:18:20.959900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.216 [2024-07-16 01:18:20.959928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.216 qpair failed and we were unable to recover it. 00:25:05.216 [2024-07-16 01:18:20.960041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.216 [2024-07-16 01:18:20.960071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.216 qpair failed and we were unable to recover it. 00:25:05.216 [2024-07-16 01:18:20.960177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.216 [2024-07-16 01:18:20.960207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.216 qpair failed and we were unable to recover it. 00:25:05.216 [2024-07-16 01:18:20.960315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.216 [2024-07-16 01:18:20.960341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.216 qpair failed and we were unable to recover it. 00:25:05.216 [2024-07-16 01:18:20.960435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.216 [2024-07-16 01:18:20.960462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.216 qpair failed and we were unable to recover it. 00:25:05.216 [2024-07-16 01:18:20.960556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.216 [2024-07-16 01:18:20.960582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.216 qpair failed and we were unable to recover it. 00:25:05.216 [2024-07-16 01:18:20.960690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.216 [2024-07-16 01:18:20.960717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.216 qpair failed and we were unable to recover it. 00:25:05.216 [2024-07-16 01:18:20.960815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.216 [2024-07-16 01:18:20.960841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.216 qpair failed and we were unable to recover it. 00:25:05.216 [2024-07-16 01:18:20.960941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.216 [2024-07-16 01:18:20.960979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.216 qpair failed and we were unable to recover it. 00:25:05.216 [2024-07-16 01:18:20.961088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.216 [2024-07-16 01:18:20.961115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.216 qpair failed and we were unable to recover it. 
00:25:05.216 [2024-07-16 01:18:20.961221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.216 [2024-07-16 01:18:20.961248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.216 qpair failed and we were unable to recover it. 00:25:05.216 [2024-07-16 01:18:20.961348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.216 [2024-07-16 01:18:20.961375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.216 qpair failed and we were unable to recover it. 00:25:05.216 [2024-07-16 01:18:20.961492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.216 [2024-07-16 01:18:20.961524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.216 qpair failed and we were unable to recover it. 00:25:05.216 [2024-07-16 01:18:20.961630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.216 [2024-07-16 01:18:20.961657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.216 qpair failed and we were unable to recover it. 00:25:05.217 [2024-07-16 01:18:20.961778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.217 [2024-07-16 01:18:20.961805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.217 qpair failed and we were unable to recover it. 00:25:05.217 [2024-07-16 01:18:20.961900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.217 [2024-07-16 01:18:20.961927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.217 qpair failed and we were unable to recover it. 00:25:05.217 [2024-07-16 01:18:20.962033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.217 [2024-07-16 01:18:20.962062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.217 qpair failed and we were unable to recover it. 00:25:05.217 [2024-07-16 01:18:20.962163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.217 [2024-07-16 01:18:20.962191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.217 qpair failed and we were unable to recover it. 00:25:05.217 [2024-07-16 01:18:20.962282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.217 [2024-07-16 01:18:20.962308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.217 qpair failed and we were unable to recover it. 00:25:05.217 [2024-07-16 01:18:20.962433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.217 [2024-07-16 01:18:20.962460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.217 qpair failed and we were unable to recover it. 
00:25:05.217 [2024-07-16 01:18:20.962576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.217 [2024-07-16 01:18:20.962601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.217 qpair failed and we were unable to recover it. 00:25:05.217 [2024-07-16 01:18:20.962697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.217 [2024-07-16 01:18:20.962724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.217 qpair failed and we were unable to recover it. 00:25:05.217 [2024-07-16 01:18:20.962827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.217 [2024-07-16 01:18:20.962856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.217 qpair failed and we were unable to recover it. 00:25:05.217 [2024-07-16 01:18:20.962974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.217 [2024-07-16 01:18:20.963014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.217 qpair failed and we were unable to recover it. 00:25:05.217 [2024-07-16 01:18:20.963127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.217 [2024-07-16 01:18:20.963168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.217 qpair failed and we were unable to recover it. 00:25:05.217 [2024-07-16 01:18:20.963270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.217 [2024-07-16 01:18:20.963298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.217 qpair failed and we were unable to recover it. 00:25:05.217 [2024-07-16 01:18:20.963398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.217 [2024-07-16 01:18:20.963425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.217 qpair failed and we were unable to recover it. 00:25:05.217 [2024-07-16 01:18:20.963520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.217 [2024-07-16 01:18:20.963546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.217 qpair failed and we were unable to recover it. 00:25:05.217 [2024-07-16 01:18:20.963643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.217 [2024-07-16 01:18:20.963669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.217 qpair failed and we were unable to recover it. 00:25:05.217 [2024-07-16 01:18:20.963786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.217 [2024-07-16 01:18:20.963813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.217 qpair failed and we were unable to recover it. 
00:25:05.217 [2024-07-16 01:18:20.963906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.217 [2024-07-16 01:18:20.963932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.217 qpair failed and we were unable to recover it. 00:25:05.217 [2024-07-16 01:18:20.964049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.217 [2024-07-16 01:18:20.964077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.217 qpair failed and we were unable to recover it. 00:25:05.217 [2024-07-16 01:18:20.964194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.217 [2024-07-16 01:18:20.964219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.217 qpair failed and we were unable to recover it. 00:25:05.217 [2024-07-16 01:18:20.964316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.217 [2024-07-16 01:18:20.964343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.217 qpair failed and we were unable to recover it. 00:25:05.217 [2024-07-16 01:18:20.964433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.217 [2024-07-16 01:18:20.964458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.217 qpair failed and we were unable to recover it. 00:25:05.217 [2024-07-16 01:18:20.964558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.217 [2024-07-16 01:18:20.964584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.217 qpair failed and we were unable to recover it. 00:25:05.217 [2024-07-16 01:18:20.964679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.217 [2024-07-16 01:18:20.964705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.217 qpair failed and we were unable to recover it. 00:25:05.217 [2024-07-16 01:18:20.964814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.217 [2024-07-16 01:18:20.964845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.217 qpair failed and we were unable to recover it. 00:25:05.217 [2024-07-16 01:18:20.964950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.217 [2024-07-16 01:18:20.964986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.217 qpair failed and we were unable to recover it. 00:25:05.217 [2024-07-16 01:18:20.965103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.217 [2024-07-16 01:18:20.965143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.217 qpair failed and we were unable to recover it. 
00:25:05.217 [2024-07-16 01:18:20.965291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.217 [2024-07-16 01:18:20.965318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.217 qpair failed and we were unable to recover it. 00:25:05.217 [2024-07-16 01:18:20.965414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.217 [2024-07-16 01:18:20.965439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.217 qpair failed and we were unable to recover it. 00:25:05.217 [2024-07-16 01:18:20.965556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.217 [2024-07-16 01:18:20.965582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.217 qpair failed and we were unable to recover it. 00:25:05.217 [2024-07-16 01:18:20.965705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.217 [2024-07-16 01:18:20.965730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.217 qpair failed and we were unable to recover it. 00:25:05.217 [2024-07-16 01:18:20.965823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.217 [2024-07-16 01:18:20.965850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.217 qpair failed and we were unable to recover it. 00:25:05.218 [2024-07-16 01:18:20.965947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.218 [2024-07-16 01:18:20.965983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.218 qpair failed and we were unable to recover it. 00:25:05.218 [2024-07-16 01:18:20.966092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.218 [2024-07-16 01:18:20.966132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.218 qpair failed and we were unable to recover it. 00:25:05.218 [2024-07-16 01:18:20.966233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.218 [2024-07-16 01:18:20.966262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.218 qpair failed and we were unable to recover it. 00:25:05.218 [2024-07-16 01:18:20.966368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.218 [2024-07-16 01:18:20.966397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.218 qpair failed and we were unable to recover it. 00:25:05.218 [2024-07-16 01:18:20.966501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.218 [2024-07-16 01:18:20.966529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.218 qpair failed and we were unable to recover it. 
00:25:05.218 [2024-07-16 01:18:20.966646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.218 [2024-07-16 01:18:20.966674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.218 qpair failed and we were unable to recover it. 00:25:05.218 [2024-07-16 01:18:20.966775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.218 [2024-07-16 01:18:20.966802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.218 qpair failed and we were unable to recover it. 00:25:05.218 [2024-07-16 01:18:20.966921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.218 [2024-07-16 01:18:20.966962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.218 qpair failed and we were unable to recover it. 00:25:05.218 [2024-07-16 01:18:20.967064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.218 [2024-07-16 01:18:20.967091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.218 qpair failed and we were unable to recover it. 00:25:05.218 [2024-07-16 01:18:20.967196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.218 [2024-07-16 01:18:20.967222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.218 qpair failed and we were unable to recover it. 00:25:05.218 [2024-07-16 01:18:20.967343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.218 [2024-07-16 01:18:20.967370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.218 qpair failed and we were unable to recover it. 00:25:05.218 [2024-07-16 01:18:20.967466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.218 [2024-07-16 01:18:20.967492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.218 qpair failed and we were unable to recover it. 00:25:05.218 [2024-07-16 01:18:20.967631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.218 [2024-07-16 01:18:20.967659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.218 qpair failed and we were unable to recover it. 00:25:05.218 [2024-07-16 01:18:20.967758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.218 [2024-07-16 01:18:20.967785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.218 qpair failed and we were unable to recover it. 00:25:05.218 [2024-07-16 01:18:20.967907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.218 [2024-07-16 01:18:20.967933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.218 qpair failed and we were unable to recover it. 
00:25:05.218 [2024-07-16 01:18:20.968034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.218 [2024-07-16 01:18:20.968061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.218 qpair failed and we were unable to recover it.
00:25:05.218 [2024-07-16 01:18:20.968425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.218 [2024-07-16 01:18:20.968454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.218 qpair failed and we were unable to recover it.
00:25:05.218 [2024-07-16 01:18:20.969739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.218 [2024-07-16 01:18:20.969780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.218 qpair failed and we were unable to recover it.
00:25:05.219 [2024-07-16 01:18:20.971912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.219 [2024-07-16 01:18:20.971953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.219 qpair failed and we were unable to recover it.
[identical "connect() failed, errno = 111" / "sock connection error" / "qpair failed and we were unable to recover it." records repeat for tqpairs 0x14b73f0, 0x7f7a58000b90, 0x7f7a60000b90, and 0x7f7a68000b90 through 01:18:20.996059]
00:25:05.225 [2024-07-16 01:18:20.996186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.225 [2024-07-16 01:18:20.996212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.225 qpair failed and we were unable to recover it.
00:25:05.225 [2024-07-16 01:18:20.996327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.225 [2024-07-16 01:18:20.996354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.225 qpair failed and we were unable to recover it. 00:25:05.225 [2024-07-16 01:18:20.996449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.225 [2024-07-16 01:18:20.996475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.225 qpair failed and we were unable to recover it. 00:25:05.225 [2024-07-16 01:18:20.996566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.225 [2024-07-16 01:18:20.996592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.225 qpair failed and we were unable to recover it. 00:25:05.225 [2024-07-16 01:18:20.996687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.225 [2024-07-16 01:18:20.996714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.225 qpair failed and we were unable to recover it. 00:25:05.225 [2024-07-16 01:18:20.996835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.225 [2024-07-16 01:18:20.996861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.225 qpair failed and we were unable to recover it. 00:25:05.225 [2024-07-16 01:18:20.996973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.225 [2024-07-16 01:18:20.997004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.225 qpair failed and we were unable to recover it. 00:25:05.225 [2024-07-16 01:18:20.997108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.225 [2024-07-16 01:18:20.997136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.225 qpair failed and we were unable to recover it. 00:25:05.225 [2024-07-16 01:18:20.997235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.225 [2024-07-16 01:18:20.997263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.225 qpair failed and we were unable to recover it. 00:25:05.225 [2024-07-16 01:18:20.997363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.225 [2024-07-16 01:18:20.997392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.225 qpair failed and we were unable to recover it. 00:25:05.225 [2024-07-16 01:18:20.997519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.225 [2024-07-16 01:18:20.997547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.225 qpair failed and we were unable to recover it. 
00:25:05.225 [2024-07-16 01:18:20.997649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.225 [2024-07-16 01:18:20.997677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.225 qpair failed and we were unable to recover it. 00:25:05.225 [2024-07-16 01:18:20.997800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.225 [2024-07-16 01:18:20.997828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.225 qpair failed and we were unable to recover it. 00:25:05.225 [2024-07-16 01:18:20.997925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.225 [2024-07-16 01:18:20.997953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.225 qpair failed and we were unable to recover it. 00:25:05.225 [2024-07-16 01:18:20.998054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.225 [2024-07-16 01:18:20.998082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.225 qpair failed and we were unable to recover it. 00:25:05.225 [2024-07-16 01:18:20.998183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.225 [2024-07-16 01:18:20.998211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.225 qpair failed and we were unable to recover it. 00:25:05.225 [2024-07-16 01:18:20.998335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.225 [2024-07-16 01:18:20.998363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.225 qpair failed and we were unable to recover it. 00:25:05.225 [2024-07-16 01:18:20.998457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.225 [2024-07-16 01:18:20.998485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.225 qpair failed and we were unable to recover it. 00:25:05.225 [2024-07-16 01:18:20.998605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.225 [2024-07-16 01:18:20.998632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.225 qpair failed and we were unable to recover it. 00:25:05.225 [2024-07-16 01:18:20.998738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.225 [2024-07-16 01:18:20.998765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.225 qpair failed and we were unable to recover it. 00:25:05.225 [2024-07-16 01:18:20.998853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.225 [2024-07-16 01:18:20.998880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.225 qpair failed and we were unable to recover it. 
00:25:05.225 [2024-07-16 01:18:20.999007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.225 [2024-07-16 01:18:20.999036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.225 qpair failed and we were unable to recover it. 00:25:05.225 [2024-07-16 01:18:20.999156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.225 [2024-07-16 01:18:20.999185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.225 qpair failed and we were unable to recover it. 00:25:05.225 [2024-07-16 01:18:20.999338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.225 [2024-07-16 01:18:20.999365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.225 qpair failed and we were unable to recover it. 00:25:05.225 [2024-07-16 01:18:20.999496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.225 [2024-07-16 01:18:20.999529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.225 qpair failed and we were unable to recover it. 00:25:05.225 [2024-07-16 01:18:20.999652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.225 [2024-07-16 01:18:20.999680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.225 qpair failed and we were unable to recover it. 00:25:05.225 [2024-07-16 01:18:20.999782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.225 [2024-07-16 01:18:20.999809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.225 qpair failed and we were unable to recover it. 00:25:05.225 [2024-07-16 01:18:20.999908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.225 [2024-07-16 01:18:20.999935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.225 qpair failed and we were unable to recover it. 00:25:05.225 [2024-07-16 01:18:21.000035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.225 [2024-07-16 01:18:21.000063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.225 qpair failed and we were unable to recover it. 00:25:05.225 [2024-07-16 01:18:21.000155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.225 [2024-07-16 01:18:21.000182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.225 qpair failed and we were unable to recover it. 00:25:05.225 [2024-07-16 01:18:21.000279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.225 [2024-07-16 01:18:21.000306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.225 qpair failed and we were unable to recover it. 
00:25:05.225 [2024-07-16 01:18:21.000456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.225 [2024-07-16 01:18:21.000483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.225 qpair failed and we were unable to recover it. 00:25:05.225 [2024-07-16 01:18:21.000581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.225 [2024-07-16 01:18:21.000608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.225 qpair failed and we were unable to recover it. 00:25:05.225 [2024-07-16 01:18:21.000712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.226 [2024-07-16 01:18:21.000740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.226 qpair failed and we were unable to recover it. 00:25:05.226 [2024-07-16 01:18:21.000866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.226 [2024-07-16 01:18:21.000894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.226 qpair failed and we were unable to recover it. 00:25:05.226 [2024-07-16 01:18:21.000991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.226 [2024-07-16 01:18:21.001018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.226 qpair failed and we were unable to recover it. 00:25:05.226 [2024-07-16 01:18:21.001112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.226 [2024-07-16 01:18:21.001140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.226 qpair failed and we were unable to recover it. 00:25:05.226 [2024-07-16 01:18:21.001235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.226 [2024-07-16 01:18:21.001262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.226 qpair failed and we were unable to recover it. 00:25:05.226 [2024-07-16 01:18:21.001359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.226 [2024-07-16 01:18:21.001387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.226 qpair failed and we were unable to recover it. 00:25:05.226 [2024-07-16 01:18:21.001477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.226 [2024-07-16 01:18:21.001505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.226 qpair failed and we were unable to recover it. 00:25:05.226 [2024-07-16 01:18:21.001599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.226 [2024-07-16 01:18:21.001627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.226 qpair failed and we were unable to recover it. 
00:25:05.226 [2024-07-16 01:18:21.001780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.226 [2024-07-16 01:18:21.001807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.226 qpair failed and we were unable to recover it. 00:25:05.226 [2024-07-16 01:18:21.001905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.226 [2024-07-16 01:18:21.001932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.226 qpair failed and we were unable to recover it. 00:25:05.226 [2024-07-16 01:18:21.002044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.226 [2024-07-16 01:18:21.002085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.226 qpair failed and we were unable to recover it. 00:25:05.226 [2024-07-16 01:18:21.002202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.226 [2024-07-16 01:18:21.002243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.226 qpair failed and we were unable to recover it. 00:25:05.226 [2024-07-16 01:18:21.002346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.226 [2024-07-16 01:18:21.002375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.226 qpair failed and we were unable to recover it. 00:25:05.226 [2024-07-16 01:18:21.002504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.226 [2024-07-16 01:18:21.002531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.226 qpair failed and we were unable to recover it. 00:25:05.226 [2024-07-16 01:18:21.002626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.226 [2024-07-16 01:18:21.002653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.226 qpair failed and we were unable to recover it. 00:25:05.226 [2024-07-16 01:18:21.002794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.226 [2024-07-16 01:18:21.002834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.226 qpair failed and we were unable to recover it. 00:25:05.226 [2024-07-16 01:18:21.002938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.226 [2024-07-16 01:18:21.002973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.226 qpair failed and we were unable to recover it. 00:25:05.226 [2024-07-16 01:18:21.003079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.226 [2024-07-16 01:18:21.003106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.226 qpair failed and we were unable to recover it. 
00:25:05.226 [2024-07-16 01:18:21.003210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.226 [2024-07-16 01:18:21.003237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.226 qpair failed and we were unable to recover it. 00:25:05.226 [2024-07-16 01:18:21.003360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.226 [2024-07-16 01:18:21.003386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.226 qpair failed and we were unable to recover it. 00:25:05.226 [2024-07-16 01:18:21.003505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.226 [2024-07-16 01:18:21.003533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.226 qpair failed and we were unable to recover it. 00:25:05.226 [2024-07-16 01:18:21.003626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.226 [2024-07-16 01:18:21.003653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.226 qpair failed and we were unable to recover it. 00:25:05.226 [2024-07-16 01:18:21.003748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.226 [2024-07-16 01:18:21.003775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.226 qpair failed and we were unable to recover it. 00:25:05.226 [2024-07-16 01:18:21.003863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.226 [2024-07-16 01:18:21.003891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.226 qpair failed and we were unable to recover it. 00:25:05.226 [2024-07-16 01:18:21.003988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.226 [2024-07-16 01:18:21.004016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.226 qpair failed and we were unable to recover it. 00:25:05.226 [2024-07-16 01:18:21.004117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.226 [2024-07-16 01:18:21.004146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.226 qpair failed and we were unable to recover it. 00:25:05.226 [2024-07-16 01:18:21.004236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.226 [2024-07-16 01:18:21.004263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.226 qpair failed and we were unable to recover it. 00:25:05.226 [2024-07-16 01:18:21.004361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.226 [2024-07-16 01:18:21.004389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.226 qpair failed and we were unable to recover it. 
00:25:05.226 [2024-07-16 01:18:21.004485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.226 [2024-07-16 01:18:21.004511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.226 qpair failed and we were unable to recover it. 00:25:05.226 [2024-07-16 01:18:21.004602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.226 [2024-07-16 01:18:21.004629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.226 qpair failed and we were unable to recover it. 00:25:05.226 [2024-07-16 01:18:21.004731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.226 [2024-07-16 01:18:21.004758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.226 qpair failed and we were unable to recover it. 00:25:05.226 [2024-07-16 01:18:21.004847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.226 [2024-07-16 01:18:21.004877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.226 qpair failed and we were unable to recover it. 00:25:05.226 [2024-07-16 01:18:21.004975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.226 [2024-07-16 01:18:21.005003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.226 qpair failed and we were unable to recover it. 00:25:05.226 [2024-07-16 01:18:21.005093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.226 [2024-07-16 01:18:21.005121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.227 qpair failed and we were unable to recover it. 00:25:05.227 [2024-07-16 01:18:21.005217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.227 [2024-07-16 01:18:21.005244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.227 qpair failed and we were unable to recover it. 00:25:05.227 [2024-07-16 01:18:21.005370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.227 [2024-07-16 01:18:21.005395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.227 qpair failed and we were unable to recover it. 00:25:05.227 [2024-07-16 01:18:21.005482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.227 [2024-07-16 01:18:21.005509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.227 qpair failed and we were unable to recover it. 00:25:05.227 [2024-07-16 01:18:21.005613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.227 [2024-07-16 01:18:21.005638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.227 qpair failed and we were unable to recover it. 
00:25:05.227 [2024-07-16 01:18:21.005724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.227 [2024-07-16 01:18:21.005750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.227 qpair failed and we were unable to recover it. 00:25:05.227 [2024-07-16 01:18:21.005851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.227 [2024-07-16 01:18:21.005880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.227 qpair failed and we were unable to recover it. 00:25:05.227 [2024-07-16 01:18:21.005980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.227 [2024-07-16 01:18:21.006008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.227 qpair failed and we were unable to recover it. 00:25:05.227 [2024-07-16 01:18:21.006104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.227 [2024-07-16 01:18:21.006131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.227 qpair failed and we were unable to recover it. 00:25:05.227 [2024-07-16 01:18:21.006223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.227 [2024-07-16 01:18:21.006251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.227 qpair failed and we were unable to recover it. 00:25:05.227 [2024-07-16 01:18:21.006351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.227 [2024-07-16 01:18:21.006378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.227 qpair failed and we were unable to recover it. 00:25:05.227 [2024-07-16 01:18:21.006477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.227 [2024-07-16 01:18:21.006505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.227 qpair failed and we were unable to recover it. 00:25:05.227 [2024-07-16 01:18:21.006617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.227 [2024-07-16 01:18:21.006644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.227 qpair failed and we were unable to recover it. 00:25:05.227 [2024-07-16 01:18:21.006733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.227 [2024-07-16 01:18:21.006759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.227 qpair failed and we were unable to recover it. 00:25:05.227 [2024-07-16 01:18:21.006863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.227 [2024-07-16 01:18:21.006893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.227 qpair failed and we were unable to recover it. 
00:25:05.227 [2024-07-16 01:18:21.006986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.227 [2024-07-16 01:18:21.007023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.227 qpair failed and we were unable to recover it. 00:25:05.227 [2024-07-16 01:18:21.007115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.227 [2024-07-16 01:18:21.007142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.227 qpair failed and we were unable to recover it. 00:25:05.227 [2024-07-16 01:18:21.007243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.227 [2024-07-16 01:18:21.007270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.227 qpair failed and we were unable to recover it. 00:25:05.227 [2024-07-16 01:18:21.007365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.227 [2024-07-16 01:18:21.007392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.227 qpair failed and we were unable to recover it. 00:25:05.227 [2024-07-16 01:18:21.007515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.227 [2024-07-16 01:18:21.007542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.227 qpair failed and we were unable to recover it. 00:25:05.227 [2024-07-16 01:18:21.007635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.227 [2024-07-16 01:18:21.007661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.227 qpair failed and we were unable to recover it. 00:25:05.227 [2024-07-16 01:18:21.007762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.227 [2024-07-16 01:18:21.007788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.227 qpair failed and we were unable to recover it. 00:25:05.227 [2024-07-16 01:18:21.007885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.227 [2024-07-16 01:18:21.007912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.227 qpair failed and we were unable to recover it. 00:25:05.227 [2024-07-16 01:18:21.008012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.227 [2024-07-16 01:18:21.008038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.227 qpair failed and we were unable to recover it. 00:25:05.227 [2024-07-16 01:18:21.008166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.227 [2024-07-16 01:18:21.008192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.227 qpair failed and we were unable to recover it. 
00:25:05.227 [2024-07-16 01:18:21.008323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.227 [2024-07-16 01:18:21.008350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.227 qpair failed and we were unable to recover it. 00:25:05.227 [2024-07-16 01:18:21.008444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.227 [2024-07-16 01:18:21.008471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.227 qpair failed and we were unable to recover it. 00:25:05.227 [2024-07-16 01:18:21.008568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.227 [2024-07-16 01:18:21.008595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.227 qpair failed and we were unable to recover it. 00:25:05.227 [2024-07-16 01:18:21.008691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.227 [2024-07-16 01:18:21.008718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.227 qpair failed and we were unable to recover it. 00:25:05.227 [2024-07-16 01:18:21.008820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.227 [2024-07-16 01:18:21.008847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.227 qpair failed and we were unable to recover it. 00:25:05.227 [2024-07-16 01:18:21.008944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.227 [2024-07-16 01:18:21.008980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.227 qpair failed and we were unable to recover it. 00:25:05.227 [2024-07-16 01:18:21.009081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.227 [2024-07-16 01:18:21.009110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.227 qpair failed and we were unable to recover it. 00:25:05.227 [2024-07-16 01:18:21.009205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.227 [2024-07-16 01:18:21.009232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.227 qpair failed and we were unable to recover it. 00:25:05.227 [2024-07-16 01:18:21.009371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.227 [2024-07-16 01:18:21.009412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.227 qpair failed and we were unable to recover it. 00:25:05.227 [2024-07-16 01:18:21.009516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.227 [2024-07-16 01:18:21.009544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.227 qpair failed and we were unable to recover it. 
00:25:05.227 [2024-07-16 01:18:21.009662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.227 [2024-07-16 01:18:21.009689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.227 qpair failed and we were unable to recover it. 00:25:05.227 [2024-07-16 01:18:21.009786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.227 [2024-07-16 01:18:21.009812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.227 qpair failed and we were unable to recover it. 00:25:05.227 [2024-07-16 01:18:21.009909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.227 [2024-07-16 01:18:21.009935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.227 qpair failed and we were unable to recover it. 00:25:05.227 [2024-07-16 01:18:21.010036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.227 [2024-07-16 01:18:21.010067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.227 qpair failed and we were unable to recover it. 00:25:05.227 [2024-07-16 01:18:21.010159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.227 [2024-07-16 01:18:21.010184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.227 qpair failed and we were unable to recover it. 00:25:05.227 [2024-07-16 01:18:21.010306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.227 [2024-07-16 01:18:21.010334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.227 qpair failed and we were unable to recover it. 00:25:05.227 [2024-07-16 01:18:21.010433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.228 [2024-07-16 01:18:21.010459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.228 qpair failed and we were unable to recover it. 00:25:05.228 [2024-07-16 01:18:21.010563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.228 [2024-07-16 01:18:21.010592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.228 qpair failed and we were unable to recover it. 00:25:05.228 [2024-07-16 01:18:21.010701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.228 [2024-07-16 01:18:21.010742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.228 qpair failed and we were unable to recover it. 00:25:05.228 [2024-07-16 01:18:21.010854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.228 [2024-07-16 01:18:21.010883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.228 qpair failed and we were unable to recover it. 
00:25:05.228 [2024-07-16 01:18:21.010984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.228 [2024-07-16 01:18:21.011013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.228 qpair failed and we were unable to recover it. 00:25:05.228 [2024-07-16 01:18:21.011114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.228 [2024-07-16 01:18:21.011141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.228 qpair failed and we were unable to recover it. 00:25:05.228 [2024-07-16 01:18:21.011245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.228 [2024-07-16 01:18:21.011272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.228 qpair failed and we were unable to recover it. 00:25:05.228 [2024-07-16 01:18:21.011367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.228 [2024-07-16 01:18:21.011394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.228 qpair failed and we were unable to recover it. 00:25:05.228 [2024-07-16 01:18:21.011502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.228 [2024-07-16 01:18:21.011531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.228 qpair failed and we were unable to recover it. 00:25:05.228 [2024-07-16 01:18:21.011626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.228 [2024-07-16 01:18:21.011653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.228 qpair failed and we were unable to recover it. 00:25:05.228 [2024-07-16 01:18:21.011752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.228 [2024-07-16 01:18:21.011779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.228 qpair failed and we were unable to recover it. 00:25:05.228 [2024-07-16 01:18:21.011919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.228 [2024-07-16 01:18:21.011945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.228 qpair failed and we were unable to recover it. 00:25:05.228 [2024-07-16 01:18:21.012075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.228 [2024-07-16 01:18:21.012101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.228 qpair failed and we were unable to recover it. 00:25:05.228 [2024-07-16 01:18:21.012197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.228 [2024-07-16 01:18:21.012223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.228 qpair failed and we were unable to recover it. 
00:25:05.228 [2024-07-16 01:18:21.012318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.228 [2024-07-16 01:18:21.012344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.228 qpair failed and we were unable to recover it. 00:25:05.228 [2024-07-16 01:18:21.012443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.228 [2024-07-16 01:18:21.012469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.228 qpair failed and we were unable to recover it. 00:25:05.228 [2024-07-16 01:18:21.012568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.228 [2024-07-16 01:18:21.012595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.228 qpair failed and we were unable to recover it. 00:25:05.228 [2024-07-16 01:18:21.012693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.228 [2024-07-16 01:18:21.012723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.228 qpair failed and we were unable to recover it. 00:25:05.228 [2024-07-16 01:18:21.012848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.228 [2024-07-16 01:18:21.012875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.228 qpair failed and we were unable to recover it. 00:25:05.228 [2024-07-16 01:18:21.013002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.228 [2024-07-16 01:18:21.013030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.228 qpair failed and we were unable to recover it. 00:25:05.228 [2024-07-16 01:18:21.013129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.228 [2024-07-16 01:18:21.013158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.228 qpair failed and we were unable to recover it. 00:25:05.228 [2024-07-16 01:18:21.013257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.228 [2024-07-16 01:18:21.013285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.228 qpair failed and we were unable to recover it. 00:25:05.228 [2024-07-16 01:18:21.013408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.228 [2024-07-16 01:18:21.013436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.228 qpair failed and we were unable to recover it. 00:25:05.228 [2024-07-16 01:18:21.013528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.228 [2024-07-16 01:18:21.013555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.228 qpair failed and we were unable to recover it. 
00:25:05.228 [2024-07-16 01:18:21.013664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.228 [2024-07-16 01:18:21.013705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.228 qpair failed and we were unable to recover it.
00:25:05.228 [2024-07-16 01:18:21.014218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.228 [2024-07-16 01:18:21.014248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.228 qpair failed and we were unable to recover it.
00:25:05.228 [2024-07-16 01:18:21.014774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.228 [2024-07-16 01:18:21.014803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.228 qpair failed and we were unable to recover it.
00:25:05.228 [2024-07-16 01:18:21.014897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.228 [2024-07-16 01:18:21.014926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.228 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed, errno = 111 / sock connection error / qpair failed and we were unable to recover it) repeats with varying timestamps for tqpairs 0x14b73f0, 0x7f7a58000b90, 0x7f7a60000b90, and 0x7f7a68000b90, always with addr=10.0.0.2, port=4420, through [2024-07-16 01:18:21.042126] ...]
00:25:05.233 [2024-07-16 01:18:21.042218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.233 [2024-07-16 01:18:21.042244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.233 qpair failed and we were unable to recover it. 00:25:05.233 [2024-07-16 01:18:21.042340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.233 [2024-07-16 01:18:21.042366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.233 qpair failed and we were unable to recover it. 00:25:05.233 [2024-07-16 01:18:21.042484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.233 [2024-07-16 01:18:21.042509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.233 qpair failed and we were unable to recover it. 00:25:05.233 [2024-07-16 01:18:21.042630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.233 [2024-07-16 01:18:21.042656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.233 qpair failed and we were unable to recover it. 00:25:05.233 [2024-07-16 01:18:21.042781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.233 [2024-07-16 01:18:21.042808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.233 qpair failed and we were unable to recover it. 00:25:05.233 [2024-07-16 01:18:21.042909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.233 [2024-07-16 01:18:21.042935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.233 qpair failed and we were unable to recover it. 00:25:05.234 [2024-07-16 01:18:21.043034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.234 [2024-07-16 01:18:21.043061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.234 qpair failed and we were unable to recover it. 00:25:05.234 [2024-07-16 01:18:21.043166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.234 [2024-07-16 01:18:21.043192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.234 qpair failed and we were unable to recover it. 00:25:05.234 [2024-07-16 01:18:21.043289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.234 [2024-07-16 01:18:21.043316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.234 qpair failed and we were unable to recover it. 00:25:05.234 [2024-07-16 01:18:21.043446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.234 [2024-07-16 01:18:21.043473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.234 qpair failed and we were unable to recover it. 
00:25:05.234 [2024-07-16 01:18:21.043597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.234 [2024-07-16 01:18:21.043623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.234 qpair failed and we were unable to recover it. 00:25:05.234 [2024-07-16 01:18:21.043720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.234 [2024-07-16 01:18:21.043746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.234 qpair failed and we were unable to recover it. 00:25:05.234 [2024-07-16 01:18:21.043866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.234 [2024-07-16 01:18:21.043893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.234 qpair failed and we were unable to recover it. 00:25:05.234 [2024-07-16 01:18:21.044018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.234 [2024-07-16 01:18:21.044044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.234 qpair failed and we were unable to recover it. 00:25:05.234 [2024-07-16 01:18:21.044139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.234 [2024-07-16 01:18:21.044166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.234 qpair failed and we were unable to recover it. 00:25:05.234 [2024-07-16 01:18:21.044260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.234 [2024-07-16 01:18:21.044286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.234 qpair failed and we were unable to recover it. 00:25:05.234 [2024-07-16 01:18:21.044380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.234 [2024-07-16 01:18:21.044406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.234 qpair failed and we were unable to recover it. 00:25:05.234 [2024-07-16 01:18:21.044519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.234 [2024-07-16 01:18:21.044559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.234 qpair failed and we were unable to recover it. 00:25:05.234 [2024-07-16 01:18:21.044692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.234 [2024-07-16 01:18:21.044733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.234 qpair failed and we were unable to recover it. 00:25:05.234 [2024-07-16 01:18:21.044863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.234 [2024-07-16 01:18:21.044892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.234 qpair failed and we were unable to recover it. 
00:25:05.234 [2024-07-16 01:18:21.044994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.234 [2024-07-16 01:18:21.045023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.234 qpair failed and we were unable to recover it. 00:25:05.234 [2024-07-16 01:18:21.045144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.234 [2024-07-16 01:18:21.045172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.234 qpair failed and we were unable to recover it. 00:25:05.234 [2024-07-16 01:18:21.045267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.234 [2024-07-16 01:18:21.045294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.234 qpair failed and we were unable to recover it. 00:25:05.234 [2024-07-16 01:18:21.045396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.234 [2024-07-16 01:18:21.045424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.234 qpair failed and we were unable to recover it. 00:25:05.234 [2024-07-16 01:18:21.045524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.234 [2024-07-16 01:18:21.045552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.234 qpair failed and we were unable to recover it. 00:25:05.234 [2024-07-16 01:18:21.045651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.234 [2024-07-16 01:18:21.045679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.234 qpair failed and we were unable to recover it. 00:25:05.234 [2024-07-16 01:18:21.045778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.234 [2024-07-16 01:18:21.045807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.234 qpair failed and we were unable to recover it. 00:25:05.234 [2024-07-16 01:18:21.045967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.234 [2024-07-16 01:18:21.045993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.234 qpair failed and we were unable to recover it. 00:25:05.234 [2024-07-16 01:18:21.046089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.234 [2024-07-16 01:18:21.046114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.234 qpair failed and we were unable to recover it. 00:25:05.234 [2024-07-16 01:18:21.046209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.234 [2024-07-16 01:18:21.046234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.234 qpair failed and we were unable to recover it. 
00:25:05.234 [2024-07-16 01:18:21.046358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.234 [2024-07-16 01:18:21.046384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.234 qpair failed and we were unable to recover it. 00:25:05.234 [2024-07-16 01:18:21.046490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.234 [2024-07-16 01:18:21.046519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.234 qpair failed and we were unable to recover it. 00:25:05.234 [2024-07-16 01:18:21.046639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.234 [2024-07-16 01:18:21.046665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.234 qpair failed and we were unable to recover it. 00:25:05.234 [2024-07-16 01:18:21.046753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.234 [2024-07-16 01:18:21.046779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.234 qpair failed and we were unable to recover it. 00:25:05.234 [2024-07-16 01:18:21.046877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.234 [2024-07-16 01:18:21.046903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.234 qpair failed and we were unable to recover it. 00:25:05.234 [2024-07-16 01:18:21.047003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.234 [2024-07-16 01:18:21.047033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.234 qpair failed and we were unable to recover it. 00:25:05.234 [2024-07-16 01:18:21.047156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.234 [2024-07-16 01:18:21.047184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.234 qpair failed and we were unable to recover it. 00:25:05.234 [2024-07-16 01:18:21.047282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.234 [2024-07-16 01:18:21.047309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.234 qpair failed and we were unable to recover it. 00:25:05.234 [2024-07-16 01:18:21.047434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.234 [2024-07-16 01:18:21.047461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.234 qpair failed and we were unable to recover it. 00:25:05.234 [2024-07-16 01:18:21.047582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.234 [2024-07-16 01:18:21.047608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.234 qpair failed and we were unable to recover it. 
00:25:05.234 [2024-07-16 01:18:21.047734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.234 [2024-07-16 01:18:21.047761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.234 qpair failed and we were unable to recover it. 00:25:05.234 [2024-07-16 01:18:21.047861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.234 [2024-07-16 01:18:21.047889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.234 qpair failed and we were unable to recover it. 00:25:05.234 [2024-07-16 01:18:21.047988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.234 [2024-07-16 01:18:21.048015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.234 qpair failed and we were unable to recover it. 00:25:05.234 [2024-07-16 01:18:21.048109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.234 [2024-07-16 01:18:21.048136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.234 qpair failed and we were unable to recover it. 00:25:05.234 [2024-07-16 01:18:21.048239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.234 [2024-07-16 01:18:21.048265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.234 qpair failed and we were unable to recover it. 00:25:05.234 [2024-07-16 01:18:21.048389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.234 [2024-07-16 01:18:21.048415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.234 qpair failed and we were unable to recover it. 00:25:05.234 [2024-07-16 01:18:21.048507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.234 [2024-07-16 01:18:21.048533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.235 qpair failed and we were unable to recover it. 00:25:05.235 [2024-07-16 01:18:21.048626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.235 [2024-07-16 01:18:21.048652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.235 qpair failed and we were unable to recover it. 00:25:05.235 [2024-07-16 01:18:21.048753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.235 [2024-07-16 01:18:21.048779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.235 qpair failed and we were unable to recover it. 00:25:05.235 [2024-07-16 01:18:21.048870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.235 [2024-07-16 01:18:21.048896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.235 qpair failed and we were unable to recover it. 
00:25:05.235 [2024-07-16 01:18:21.049002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.235 [2024-07-16 01:18:21.049032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.235 qpair failed and we were unable to recover it. 00:25:05.235 [2024-07-16 01:18:21.049133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.235 [2024-07-16 01:18:21.049161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.235 qpair failed and we were unable to recover it. 00:25:05.235 [2024-07-16 01:18:21.049262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.235 [2024-07-16 01:18:21.049289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.235 qpair failed and we were unable to recover it. 00:25:05.235 [2024-07-16 01:18:21.049441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.235 [2024-07-16 01:18:21.049469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.235 qpair failed and we were unable to recover it. 00:25:05.235 [2024-07-16 01:18:21.049561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.235 [2024-07-16 01:18:21.049588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.235 qpair failed and we were unable to recover it. 00:25:05.235 [2024-07-16 01:18:21.049681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.235 [2024-07-16 01:18:21.049708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.235 qpair failed and we were unable to recover it. 00:25:05.235 [2024-07-16 01:18:21.049807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.235 [2024-07-16 01:18:21.049835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.235 qpair failed and we were unable to recover it. 00:25:05.235 [2024-07-16 01:18:21.049961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.235 [2024-07-16 01:18:21.049992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.235 qpair failed and we were unable to recover it. 00:25:05.235 [2024-07-16 01:18:21.050118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.235 [2024-07-16 01:18:21.050145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.235 qpair failed and we were unable to recover it. 00:25:05.235 [2024-07-16 01:18:21.050241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.235 [2024-07-16 01:18:21.050267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.235 qpair failed and we were unable to recover it. 
00:25:05.235 [2024-07-16 01:18:21.050389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.235 [2024-07-16 01:18:21.050415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.235 qpair failed and we were unable to recover it. 00:25:05.235 [2024-07-16 01:18:21.050511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.235 [2024-07-16 01:18:21.050536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.235 qpair failed and we were unable to recover it. 00:25:05.235 [2024-07-16 01:18:21.050658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.235 [2024-07-16 01:18:21.050684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.235 qpair failed and we were unable to recover it. 00:25:05.235 [2024-07-16 01:18:21.050779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.235 [2024-07-16 01:18:21.050807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.235 qpair failed and we were unable to recover it. 00:25:05.235 [2024-07-16 01:18:21.050917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.235 [2024-07-16 01:18:21.050964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.235 qpair failed and we were unable to recover it. 00:25:05.235 [2024-07-16 01:18:21.051074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.235 [2024-07-16 01:18:21.051103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.235 qpair failed and we were unable to recover it. 00:25:05.235 [2024-07-16 01:18:21.051204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.235 [2024-07-16 01:18:21.051232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.235 qpair failed and we were unable to recover it. 00:25:05.235 [2024-07-16 01:18:21.051326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.235 [2024-07-16 01:18:21.051353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.235 qpair failed and we were unable to recover it. 00:25:05.235 [2024-07-16 01:18:21.051451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.235 [2024-07-16 01:18:21.051480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.235 qpair failed and we were unable to recover it. 00:25:05.235 [2024-07-16 01:18:21.051611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.235 [2024-07-16 01:18:21.051639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.235 qpair failed and we were unable to recover it. 
00:25:05.235 [2024-07-16 01:18:21.051759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.235 [2024-07-16 01:18:21.051787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.235 qpair failed and we were unable to recover it. 00:25:05.235 [2024-07-16 01:18:21.051891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.235 [2024-07-16 01:18:21.051918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.235 qpair failed and we were unable to recover it. 00:25:05.235 [2024-07-16 01:18:21.052023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.236 [2024-07-16 01:18:21.052052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.236 qpair failed and we were unable to recover it. 00:25:05.236 [2024-07-16 01:18:21.052143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.236 [2024-07-16 01:18:21.052171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.236 qpair failed and we were unable to recover it. 00:25:05.236 [2024-07-16 01:18:21.052274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.236 [2024-07-16 01:18:21.052301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.236 qpair failed and we were unable to recover it. 00:25:05.236 [2024-07-16 01:18:21.052394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.236 [2024-07-16 01:18:21.052422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.236 qpair failed and we were unable to recover it. 00:25:05.236 [2024-07-16 01:18:21.052517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.236 [2024-07-16 01:18:21.052545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.236 qpair failed and we were unable to recover it. 00:25:05.236 [2024-07-16 01:18:21.052641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.236 [2024-07-16 01:18:21.052671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.236 qpair failed and we were unable to recover it. 00:25:05.236 [2024-07-16 01:18:21.052807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.236 [2024-07-16 01:18:21.052836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.236 qpair failed and we were unable to recover it. 00:25:05.236 [2024-07-16 01:18:21.052936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.236 [2024-07-16 01:18:21.052970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.236 qpair failed and we were unable to recover it. 
00:25:05.236 [2024-07-16 01:18:21.053065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.236 [2024-07-16 01:18:21.053091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.236 qpair failed and we were unable to recover it. 00:25:05.236 [2024-07-16 01:18:21.053191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.236 [2024-07-16 01:18:21.053218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.236 qpair failed and we were unable to recover it. 00:25:05.236 [2024-07-16 01:18:21.053317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.236 [2024-07-16 01:18:21.053344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.236 qpair failed and we were unable to recover it. 00:25:05.236 [2024-07-16 01:18:21.053502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.236 [2024-07-16 01:18:21.053528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.236 qpair failed and we were unable to recover it. 00:25:05.236 [2024-07-16 01:18:21.053663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.236 [2024-07-16 01:18:21.053690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.236 qpair failed and we were unable to recover it. 00:25:05.236 [2024-07-16 01:18:21.053780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.236 [2024-07-16 01:18:21.053806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.236 qpair failed and we were unable to recover it. 00:25:05.236 [2024-07-16 01:18:21.053914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.236 [2024-07-16 01:18:21.053963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.236 qpair failed and we were unable to recover it. 00:25:05.236 [2024-07-16 01:18:21.054066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.236 [2024-07-16 01:18:21.054094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.236 qpair failed and we were unable to recover it. 00:25:05.236 [2024-07-16 01:18:21.054189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.236 [2024-07-16 01:18:21.054217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.236 qpair failed and we were unable to recover it. 00:25:05.236 [2024-07-16 01:18:21.054320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.236 [2024-07-16 01:18:21.054347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.236 qpair failed and we were unable to recover it. 
00:25:05.236 [2024-07-16 01:18:21.054477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.236 [2024-07-16 01:18:21.054504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.236 qpair failed and we were unable to recover it. 00:25:05.236 [2024-07-16 01:18:21.054628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.236 [2024-07-16 01:18:21.054655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.236 qpair failed and we were unable to recover it. 00:25:05.236 [2024-07-16 01:18:21.054782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.236 [2024-07-16 01:18:21.054810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.236 qpair failed and we were unable to recover it. 00:25:05.236 [2024-07-16 01:18:21.054910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.236 [2024-07-16 01:18:21.054937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.236 qpair failed and we were unable to recover it. 00:25:05.236 [2024-07-16 01:18:21.055043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.236 [2024-07-16 01:18:21.055071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.236 qpair failed and we were unable to recover it. 00:25:05.236 [2024-07-16 01:18:21.055165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.236 [2024-07-16 01:18:21.055192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.236 qpair failed and we were unable to recover it. 00:25:05.236 [2024-07-16 01:18:21.055291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.236 [2024-07-16 01:18:21.055318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.236 qpair failed and we were unable to recover it. 00:25:05.236 [2024-07-16 01:18:21.055422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.236 [2024-07-16 01:18:21.055455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.236 qpair failed and we were unable to recover it. 00:25:05.236 [2024-07-16 01:18:21.055554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.236 [2024-07-16 01:18:21.055581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.236 qpair failed and we were unable to recover it. 00:25:05.236 [2024-07-16 01:18:21.055673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.236 [2024-07-16 01:18:21.055699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.236 qpair failed and we were unable to recover it. 
00:25:05.236 [2024-07-16 01:18:21.055793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.236 [2024-07-16 01:18:21.055819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.236 qpair failed and we were unable to recover it. 00:25:05.236 [2024-07-16 01:18:21.055911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.236 [2024-07-16 01:18:21.055937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.236 qpair failed and we were unable to recover it. 00:25:05.236 [2024-07-16 01:18:21.056045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.236 [2024-07-16 01:18:21.056072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.236 qpair failed and we were unable to recover it. 00:25:05.236 [2024-07-16 01:18:21.056175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.236 [2024-07-16 01:18:21.056203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.236 qpair failed and we were unable to recover it. 00:25:05.236 [2024-07-16 01:18:21.056325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.236 [2024-07-16 01:18:21.056353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.236 qpair failed and we were unable to recover it. 00:25:05.236 [2024-07-16 01:18:21.056441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.236 [2024-07-16 01:18:21.056468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.236 qpair failed and we were unable to recover it. 00:25:05.236 [2024-07-16 01:18:21.056565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.236 [2024-07-16 01:18:21.056591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.236 qpair failed and we were unable to recover it. 00:25:05.236 [2024-07-16 01:18:21.056688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.236 [2024-07-16 01:18:21.056716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.236 qpair failed and we were unable to recover it. 00:25:05.236 [2024-07-16 01:18:21.056817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.236 [2024-07-16 01:18:21.056845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.236 qpair failed and we were unable to recover it. 00:25:05.236 [2024-07-16 01:18:21.056939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.236 [2024-07-16 01:18:21.056974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.236 qpair failed and we were unable to recover it. 
00:25:05.236 [2024-07-16 01:18:21.057070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.236 [2024-07-16 01:18:21.057097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.236 qpair failed and we were unable to recover it. 00:25:05.236 [2024-07-16 01:18:21.057204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.236 [2024-07-16 01:18:21.057231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.236 qpair failed and we were unable to recover it. 00:25:05.236 [2024-07-16 01:18:21.057357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.236 [2024-07-16 01:18:21.057384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.237 qpair failed and we were unable to recover it. 00:25:05.237 [2024-07-16 01:18:21.057480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.237 [2024-07-16 01:18:21.057508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.237 qpair failed and we were unable to recover it. 00:25:05.237 [2024-07-16 01:18:21.057634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.237 [2024-07-16 01:18:21.057660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.237 qpair failed and we were unable to recover it. 00:25:05.237 [2024-07-16 01:18:21.057760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.237 [2024-07-16 01:18:21.057788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.237 qpair failed and we were unable to recover it. 00:25:05.237 [2024-07-16 01:18:21.057886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.237 [2024-07-16 01:18:21.057913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.237 qpair failed and we were unable to recover it. 00:25:05.237 [2024-07-16 01:18:21.058024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.237 [2024-07-16 01:18:21.058051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.237 qpair failed and we were unable to recover it. 00:25:05.237 [2024-07-16 01:18:21.058144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.237 [2024-07-16 01:18:21.058171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.237 qpair failed and we were unable to recover it. 00:25:05.237 [2024-07-16 01:18:21.058262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.237 [2024-07-16 01:18:21.058289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.237 qpair failed and we were unable to recover it. 
00:25:05.237 [2024-07-16 01:18:21.058383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.237 [2024-07-16 01:18:21.058409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.237 qpair failed and we were unable to recover it. 00:25:05.237 [2024-07-16 01:18:21.058522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.237 [2024-07-16 01:18:21.058548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.237 qpair failed and we were unable to recover it. 00:25:05.237 [2024-07-16 01:18:21.058639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.237 [2024-07-16 01:18:21.058665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.237 qpair failed and we were unable to recover it. 00:25:05.237 [2024-07-16 01:18:21.058755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.237 [2024-07-16 01:18:21.058782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.237 qpair failed and we were unable to recover it. 00:25:05.237 [2024-07-16 01:18:21.058904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.237 [2024-07-16 01:18:21.058931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.237 qpair failed and we were unable to recover it. 00:25:05.237 [2024-07-16 01:18:21.059050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.237 [2024-07-16 01:18:21.059091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.237 qpair failed and we were unable to recover it. 00:25:05.237 [2024-07-16 01:18:21.059197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.237 [2024-07-16 01:18:21.059224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.237 qpair failed and we were unable to recover it. 00:25:05.237 [2024-07-16 01:18:21.059321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.237 [2024-07-16 01:18:21.059347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.237 qpair failed and we were unable to recover it. 00:25:05.237 [2024-07-16 01:18:21.059475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.237 [2024-07-16 01:18:21.059501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.237 qpair failed and we were unable to recover it. 00:25:05.237 [2024-07-16 01:18:21.059652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.237 [2024-07-16 01:18:21.059678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.237 qpair failed and we were unable to recover it. 
00:25:05.237 [2024-07-16 01:18:21.059795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.237 [2024-07-16 01:18:21.059836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.237 qpair failed and we were unable to recover it. 00:25:05.237 [2024-07-16 01:18:21.059967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.237 [2024-07-16 01:18:21.059996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.237 qpair failed and we were unable to recover it. 00:25:05.237 [2024-07-16 01:18:21.060089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.237 [2024-07-16 01:18:21.060116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.237 qpair failed and we were unable to recover it. 00:25:05.237 [2024-07-16 01:18:21.060238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.237 [2024-07-16 01:18:21.060265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.237 qpair failed and we were unable to recover it. 00:25:05.237 [2024-07-16 01:18:21.060364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.237 [2024-07-16 01:18:21.060393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.237 qpair failed and we were unable to recover it. 00:25:05.237 [2024-07-16 01:18:21.060522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.237 [2024-07-16 01:18:21.060550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.237 qpair failed and we were unable to recover it. 00:25:05.237 [2024-07-16 01:18:21.060651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.237 [2024-07-16 01:18:21.060679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.237 qpair failed and we were unable to recover it. 00:25:05.237 [2024-07-16 01:18:21.060770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.237 [2024-07-16 01:18:21.060801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.237 qpair failed and we were unable to recover it. 00:25:05.237 [2024-07-16 01:18:21.060899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.237 [2024-07-16 01:18:21.060924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.237 qpair failed and we were unable to recover it. 00:25:05.237 [2024-07-16 01:18:21.061032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.237 [2024-07-16 01:18:21.061060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.237 qpair failed and we were unable to recover it. 
00:25:05.242 [2024-07-16 01:18:21.088300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.242 [2024-07-16 01:18:21.088327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.242 qpair failed and we were unable to recover it. 00:25:05.242 [2024-07-16 01:18:21.088447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.242 [2024-07-16 01:18:21.088473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.242 qpair failed and we were unable to recover it. 00:25:05.242 [2024-07-16 01:18:21.088573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.242 [2024-07-16 01:18:21.088599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.242 qpair failed and we were unable to recover it. 00:25:05.242 [2024-07-16 01:18:21.088690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.242 [2024-07-16 01:18:21.088716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.242 qpair failed and we were unable to recover it. 00:25:05.242 [2024-07-16 01:18:21.088805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.242 [2024-07-16 01:18:21.088831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.242 qpair failed and we were unable to recover it. 00:25:05.242 [2024-07-16 01:18:21.088971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.242 [2024-07-16 01:18:21.088997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.242 qpair failed and we were unable to recover it. 00:25:05.242 [2024-07-16 01:18:21.089097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.242 [2024-07-16 01:18:21.089124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.242 qpair failed and we were unable to recover it. 00:25:05.242 [2024-07-16 01:18:21.089221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.242 [2024-07-16 01:18:21.089247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.242 qpair failed and we were unable to recover it. 00:25:05.242 [2024-07-16 01:18:21.089339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.242 [2024-07-16 01:18:21.089365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.242 qpair failed and we were unable to recover it. 00:25:05.242 [2024-07-16 01:18:21.089485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.267 [2024-07-16 01:18:21.089510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.267 qpair failed and we were unable to recover it. 
00:25:05.267 [2024-07-16 01:18:21.089610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.267 [2024-07-16 01:18:21.089636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.267 qpair failed and we were unable to recover it. 00:25:05.267 [2024-07-16 01:18:21.089733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.267 [2024-07-16 01:18:21.089759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.267 qpair failed and we were unable to recover it. 00:25:05.267 [2024-07-16 01:18:21.089902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.267 [2024-07-16 01:18:21.089943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.267 qpair failed and we were unable to recover it. 00:25:05.267 [2024-07-16 01:18:21.090055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.267 [2024-07-16 01:18:21.090085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.267 qpair failed and we were unable to recover it. 00:25:05.267 [2024-07-16 01:18:21.090187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.267 [2024-07-16 01:18:21.090214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.267 qpair failed and we were unable to recover it. 00:25:05.267 [2024-07-16 01:18:21.090314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.267 [2024-07-16 01:18:21.090340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.267 qpair failed and we were unable to recover it. 00:25:05.267 [2024-07-16 01:18:21.090431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.267 [2024-07-16 01:18:21.090458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.267 qpair failed and we were unable to recover it. 00:25:05.267 [2024-07-16 01:18:21.090567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.267 [2024-07-16 01:18:21.090594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.267 qpair failed and we were unable to recover it. 00:25:05.267 [2024-07-16 01:18:21.090716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.267 [2024-07-16 01:18:21.090742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.267 qpair failed and we were unable to recover it. 00:25:05.267 [2024-07-16 01:18:21.090837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.267 [2024-07-16 01:18:21.090863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.267 qpair failed and we were unable to recover it. 
00:25:05.267 [2024-07-16 01:18:21.090991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.267 [2024-07-16 01:18:21.091019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.267 qpair failed and we were unable to recover it. 00:25:05.267 [2024-07-16 01:18:21.091115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.267 [2024-07-16 01:18:21.091141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.267 qpair failed and we were unable to recover it. 00:25:05.267 [2024-07-16 01:18:21.091246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.267 [2024-07-16 01:18:21.091274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.267 qpair failed and we were unable to recover it. 00:25:05.267 [2024-07-16 01:18:21.091374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.267 [2024-07-16 01:18:21.091401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.267 qpair failed and we were unable to recover it. 00:25:05.267 [2024-07-16 01:18:21.091528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.267 [2024-07-16 01:18:21.091557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.267 qpair failed and we were unable to recover it. 00:25:05.267 [2024-07-16 01:18:21.091678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.267 [2024-07-16 01:18:21.091704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.267 qpair failed and we were unable to recover it. 00:25:05.267 [2024-07-16 01:18:21.091798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.267 [2024-07-16 01:18:21.091824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.267 qpair failed and we were unable to recover it. 00:25:05.267 [2024-07-16 01:18:21.091943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.267 [2024-07-16 01:18:21.091984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.267 qpair failed and we were unable to recover it. 00:25:05.267 [2024-07-16 01:18:21.092089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.267 [2024-07-16 01:18:21.092115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.267 qpair failed and we were unable to recover it. 00:25:05.267 [2024-07-16 01:18:21.092211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.267 [2024-07-16 01:18:21.092237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.267 qpair failed and we were unable to recover it. 
00:25:05.267 [2024-07-16 01:18:21.092331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.267 [2024-07-16 01:18:21.092357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.267 qpair failed and we were unable to recover it. 00:25:05.267 [2024-07-16 01:18:21.092477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.267 [2024-07-16 01:18:21.092504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.267 qpair failed and we were unable to recover it. 00:25:05.267 [2024-07-16 01:18:21.092603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.267 [2024-07-16 01:18:21.092629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.267 qpair failed and we were unable to recover it. 00:25:05.267 [2024-07-16 01:18:21.092748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.267 [2024-07-16 01:18:21.092774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.267 qpair failed and we were unable to recover it. 00:25:05.267 [2024-07-16 01:18:21.092906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.267 [2024-07-16 01:18:21.092946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.267 qpair failed and we were unable to recover it. 00:25:05.267 [2024-07-16 01:18:21.093070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.267 [2024-07-16 01:18:21.093100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.267 qpair failed and we were unable to recover it. 00:25:05.267 [2024-07-16 01:18:21.093198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.267 [2024-07-16 01:18:21.093226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.267 qpair failed and we were unable to recover it. 00:25:05.267 [2024-07-16 01:18:21.093324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.267 [2024-07-16 01:18:21.093356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.267 qpair failed and we were unable to recover it. 00:25:05.267 [2024-07-16 01:18:21.093452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.268 [2024-07-16 01:18:21.093479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.268 qpair failed and we were unable to recover it. 00:25:05.268 [2024-07-16 01:18:21.093582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.268 [2024-07-16 01:18:21.093610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.268 qpair failed and we were unable to recover it. 
00:25:05.268 [2024-07-16 01:18:21.093708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.268 [2024-07-16 01:18:21.093735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.268 qpair failed and we were unable to recover it. 00:25:05.268 [2024-07-16 01:18:21.093845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.268 [2024-07-16 01:18:21.093886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.268 qpair failed and we were unable to recover it. 00:25:05.268 [2024-07-16 01:18:21.093999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.268 [2024-07-16 01:18:21.094029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.268 qpair failed and we were unable to recover it. 00:25:05.268 [2024-07-16 01:18:21.094131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.268 [2024-07-16 01:18:21.094158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.268 qpair failed and we were unable to recover it. 00:25:05.268 [2024-07-16 01:18:21.094254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.268 [2024-07-16 01:18:21.094281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.268 qpair failed and we were unable to recover it. 00:25:05.268 [2024-07-16 01:18:21.094376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.268 [2024-07-16 01:18:21.094403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.268 qpair failed and we were unable to recover it. 00:25:05.268 [2024-07-16 01:18:21.094502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.268 [2024-07-16 01:18:21.094529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.268 qpair failed and we were unable to recover it. 00:25:05.268 [2024-07-16 01:18:21.094619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.268 [2024-07-16 01:18:21.094645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.268 qpair failed and we were unable to recover it. 00:25:05.268 [2024-07-16 01:18:21.094743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.268 [2024-07-16 01:18:21.094770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.268 qpair failed and we were unable to recover it. 00:25:05.268 [2024-07-16 01:18:21.094866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.268 [2024-07-16 01:18:21.094893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.268 qpair failed and we were unable to recover it. 
00:25:05.268 [2024-07-16 01:18:21.095000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.268 [2024-07-16 01:18:21.095029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.268 qpair failed and we were unable to recover it. 00:25:05.268 [2024-07-16 01:18:21.095144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.268 [2024-07-16 01:18:21.095176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.268 qpair failed and we were unable to recover it. 00:25:05.268 [2024-07-16 01:18:21.095273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.268 [2024-07-16 01:18:21.095301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.268 qpair failed and we were unable to recover it. 00:25:05.268 [2024-07-16 01:18:21.095395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.268 [2024-07-16 01:18:21.095423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.268 qpair failed and we were unable to recover it. 00:25:05.268 [2024-07-16 01:18:21.095525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.268 [2024-07-16 01:18:21.095553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.268 qpair failed and we were unable to recover it. 00:25:05.268 [2024-07-16 01:18:21.095648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.268 [2024-07-16 01:18:21.095675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.268 qpair failed and we were unable to recover it. 00:25:05.268 [2024-07-16 01:18:21.095773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.268 [2024-07-16 01:18:21.095800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.268 qpair failed and we were unable to recover it. 00:25:05.268 [2024-07-16 01:18:21.095898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.268 [2024-07-16 01:18:21.095926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.268 qpair failed and we were unable to recover it. 00:25:05.268 [2024-07-16 01:18:21.096036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.268 [2024-07-16 01:18:21.096066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.268 qpair failed and we were unable to recover it. 00:25:05.268 [2024-07-16 01:18:21.096163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.268 [2024-07-16 01:18:21.096190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.268 qpair failed and we were unable to recover it. 
00:25:05.268 [2024-07-16 01:18:21.096313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.268 [2024-07-16 01:18:21.096340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.268 qpair failed and we were unable to recover it. 00:25:05.268 [2024-07-16 01:18:21.096434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.268 [2024-07-16 01:18:21.096461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.268 qpair failed and we were unable to recover it. 00:25:05.268 [2024-07-16 01:18:21.096561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.268 [2024-07-16 01:18:21.096590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.268 qpair failed and we were unable to recover it. 00:25:05.268 [2024-07-16 01:18:21.096691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.268 [2024-07-16 01:18:21.096719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.268 qpair failed and we were unable to recover it. 00:25:05.268 [2024-07-16 01:18:21.096808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.268 [2024-07-16 01:18:21.096841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.268 qpair failed and we were unable to recover it. 00:25:05.268 [2024-07-16 01:18:21.096992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.268 [2024-07-16 01:18:21.097020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.268 qpair failed and we were unable to recover it. 00:25:05.268 [2024-07-16 01:18:21.097141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.268 [2024-07-16 01:18:21.097169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.268 qpair failed and we were unable to recover it. 00:25:05.268 [2024-07-16 01:18:21.097283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.268 [2024-07-16 01:18:21.097323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.268 qpair failed and we were unable to recover it. 00:25:05.268 [2024-07-16 01:18:21.097431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.268 [2024-07-16 01:18:21.097459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.268 qpair failed and we were unable to recover it. 00:25:05.268 [2024-07-16 01:18:21.097585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.268 [2024-07-16 01:18:21.097612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.268 qpair failed and we were unable to recover it. 
00:25:05.268 [2024-07-16 01:18:21.097737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.268 [2024-07-16 01:18:21.097763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.268 qpair failed and we were unable to recover it. 00:25:05.268 [2024-07-16 01:18:21.097884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.268 [2024-07-16 01:18:21.097925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.269 qpair failed and we were unable to recover it. 00:25:05.269 [2024-07-16 01:18:21.098053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.269 [2024-07-16 01:18:21.098093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.269 qpair failed and we were unable to recover it. 00:25:05.269 [2024-07-16 01:18:21.098201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.269 [2024-07-16 01:18:21.098229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.269 qpair failed and we were unable to recover it. 00:25:05.269 [2024-07-16 01:18:21.098331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.269 [2024-07-16 01:18:21.098360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.269 qpair failed and we were unable to recover it. 00:25:05.269 [2024-07-16 01:18:21.098490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.269 [2024-07-16 01:18:21.098518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.269 qpair failed and we were unable to recover it. 00:25:05.269 [2024-07-16 01:18:21.098613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.269 [2024-07-16 01:18:21.098640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.269 qpair failed and we were unable to recover it. 00:25:05.269 [2024-07-16 01:18:21.098738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.269 [2024-07-16 01:18:21.098767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.269 qpair failed and we were unable to recover it. 00:25:05.269 [2024-07-16 01:18:21.098901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.269 [2024-07-16 01:18:21.098927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.269 qpair failed and we were unable to recover it. 00:25:05.269 [2024-07-16 01:18:21.099027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.269 [2024-07-16 01:18:21.099055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.269 qpair failed and we were unable to recover it. 
00:25:05.269 [2024-07-16 01:18:21.099158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.269 [2024-07-16 01:18:21.099187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.269 qpair failed and we were unable to recover it. 00:25:05.269 [2024-07-16 01:18:21.099279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.269 [2024-07-16 01:18:21.099306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.269 qpair failed and we were unable to recover it. 00:25:05.269 [2024-07-16 01:18:21.099411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.269 [2024-07-16 01:18:21.099439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.269 qpair failed and we were unable to recover it. 00:25:05.269 [2024-07-16 01:18:21.099555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.269 [2024-07-16 01:18:21.099582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.269 qpair failed and we were unable to recover it. 00:25:05.269 [2024-07-16 01:18:21.099686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.269 [2024-07-16 01:18:21.099713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.269 qpair failed and we were unable to recover it. 00:25:05.269 [2024-07-16 01:18:21.099812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.269 [2024-07-16 01:18:21.099837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.269 qpair failed and we were unable to recover it. 00:25:05.269 [2024-07-16 01:18:21.099971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.269 [2024-07-16 01:18:21.099998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.269 qpair failed and we were unable to recover it. 00:25:05.269 [2024-07-16 01:18:21.100117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.269 [2024-07-16 01:18:21.100143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.269 qpair failed and we were unable to recover it. 00:25:05.269 [2024-07-16 01:18:21.100245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.269 [2024-07-16 01:18:21.100272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.269 qpair failed and we were unable to recover it. 00:25:05.269 [2024-07-16 01:18:21.100366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.269 [2024-07-16 01:18:21.100391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.269 qpair failed and we were unable to recover it. 
00:25:05.269 [2024-07-16 01:18:21.100488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.269 [2024-07-16 01:18:21.100514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.269 qpair failed and we were unable to recover it. 00:25:05.269 [2024-07-16 01:18:21.100650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.269 [2024-07-16 01:18:21.100691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.269 qpair failed and we were unable to recover it. 00:25:05.269 [2024-07-16 01:18:21.100799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.269 [2024-07-16 01:18:21.100828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.269 qpair failed and we were unable to recover it. 00:25:05.269 [2024-07-16 01:18:21.100932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.269 [2024-07-16 01:18:21.100966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.269 qpair failed and we were unable to recover it. 00:25:05.269 [2024-07-16 01:18:21.101065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.269 [2024-07-16 01:18:21.101093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.269 qpair failed and we were unable to recover it. 00:25:05.269 [2024-07-16 01:18:21.101243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.269 [2024-07-16 01:18:21.101270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.269 qpair failed and we were unable to recover it. 00:25:05.269 [2024-07-16 01:18:21.101378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.269 [2024-07-16 01:18:21.101405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.269 qpair failed and we were unable to recover it. 00:25:05.269 [2024-07-16 01:18:21.101496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.269 [2024-07-16 01:18:21.101523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.269 qpair failed and we were unable to recover it. 00:25:05.269 [2024-07-16 01:18:21.101625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.269 [2024-07-16 01:18:21.101652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.269 qpair failed and we were unable to recover it. 00:25:05.269 [2024-07-16 01:18:21.101751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.269 [2024-07-16 01:18:21.101778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.269 qpair failed and we were unable to recover it. 
00:25:05.269 [2024-07-16 01:18:21.101901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.269 [2024-07-16 01:18:21.101928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.269 qpair failed and we were unable to recover it. 00:25:05.269 [2024-07-16 01:18:21.102060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.269 [2024-07-16 01:18:21.102088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.269 qpair failed and we were unable to recover it. 00:25:05.269 [2024-07-16 01:18:21.102183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.269 [2024-07-16 01:18:21.102211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.269 qpair failed and we were unable to recover it. 00:25:05.269 [2024-07-16 01:18:21.102312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.269 [2024-07-16 01:18:21.102341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.269 qpair failed and we were unable to recover it. 00:25:05.269 [2024-07-16 01:18:21.102435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.269 [2024-07-16 01:18:21.102467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.269 qpair failed and we were unable to recover it. 00:25:05.269 [2024-07-16 01:18:21.102563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.269 [2024-07-16 01:18:21.102590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.269 qpair failed and we were unable to recover it. 00:25:05.269 [2024-07-16 01:18:21.102683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.269 [2024-07-16 01:18:21.102710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.269 qpair failed and we were unable to recover it. 00:25:05.269 [2024-07-16 01:18:21.102808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.269 [2024-07-16 01:18:21.102834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.269 qpair failed and we were unable to recover it. 00:25:05.269 [2024-07-16 01:18:21.102937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.269 [2024-07-16 01:18:21.102971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.269 qpair failed and we were unable to recover it. 00:25:05.269 [2024-07-16 01:18:21.103082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.269 [2024-07-16 01:18:21.103111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.269 qpair failed and we were unable to recover it. 
00:25:05.269 [2024-07-16 01:18:21.103210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.269 [2024-07-16 01:18:21.103237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.269 qpair failed and we were unable to recover it. 00:25:05.269 [2024-07-16 01:18:21.103363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.270 [2024-07-16 01:18:21.103390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.270 qpair failed and we were unable to recover it. 00:25:05.270 [2024-07-16 01:18:21.103514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.270 [2024-07-16 01:18:21.103541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.270 qpair failed and we were unable to recover it. 00:25:05.270 [2024-07-16 01:18:21.103645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.270 [2024-07-16 01:18:21.103672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.270 qpair failed and we were unable to recover it. 00:25:05.270 [2024-07-16 01:18:21.103768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.270 [2024-07-16 01:18:21.103796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.270 qpair failed and we were unable to recover it. 00:25:05.270 [2024-07-16 01:18:21.103925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.270 [2024-07-16 01:18:21.103952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.270 qpair failed and we were unable to recover it. 00:25:05.270 [2024-07-16 01:18:21.104069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.270 [2024-07-16 01:18:21.104097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.270 qpair failed and we were unable to recover it. 00:25:05.270 [2024-07-16 01:18:21.104194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.270 [2024-07-16 01:18:21.104223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.270 qpair failed and we were unable to recover it. 00:25:05.270 [2024-07-16 01:18:21.104365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.270 [2024-07-16 01:18:21.104393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.270 qpair failed and we were unable to recover it. 00:25:05.270 [2024-07-16 01:18:21.104492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.270 [2024-07-16 01:18:21.104521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.270 qpair failed and we were unable to recover it. 
00:25:05.270 [2024-07-16 01:18:21.104669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.270 [2024-07-16 01:18:21.104696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.270 qpair failed and we were unable to recover it. 00:25:05.270 [2024-07-16 01:18:21.104802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.270 [2024-07-16 01:18:21.104829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.270 qpair failed and we were unable to recover it. 00:25:05.270 [2024-07-16 01:18:21.104941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.270 [2024-07-16 01:18:21.104976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.270 qpair failed and we were unable to recover it. 00:25:05.270 [2024-07-16 01:18:21.105077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.270 [2024-07-16 01:18:21.105104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.270 qpair failed and we were unable to recover it. 00:25:05.270 [2024-07-16 01:18:21.105228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.270 [2024-07-16 01:18:21.105254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.270 qpair failed and we were unable to recover it. 00:25:05.270 [2024-07-16 01:18:21.105347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.270 [2024-07-16 01:18:21.105373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.270 qpair failed and we were unable to recover it. 00:25:05.270 [2024-07-16 01:18:21.105469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.270 [2024-07-16 01:18:21.105495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.270 qpair failed and we were unable to recover it. 00:25:05.270 [2024-07-16 01:18:21.105610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.270 [2024-07-16 01:18:21.105636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.270 qpair failed and we were unable to recover it. 00:25:05.270 [2024-07-16 01:18:21.105734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.270 [2024-07-16 01:18:21.105762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.270 qpair failed and we were unable to recover it. 00:25:05.270 [2024-07-16 01:18:21.105875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.270 [2024-07-16 01:18:21.105916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.270 qpair failed and we were unable to recover it. 
00:25:05.270 [2024-07-16 01:18:21.106037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.270 [2024-07-16 01:18:21.106078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.270 qpair failed and we were unable to recover it.
00:25:05.275 [... the same three-line failure (posix.c:1023:posix_sock_create: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error / "qpair failed and we were unable to recover it.") repeats continuously from 01:18:21.106 through 01:18:21.134, cycling over tqpairs 0x7f7a68000b90, 0x7f7a60000b90, 0x7f7a58000b90, and 0x14b73f0, all against addr=10.0.0.2, port=4420 ...]
00:25:05.275 [2024-07-16 01:18:21.134876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.275 [2024-07-16 01:18:21.134906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.275 qpair failed and we were unable to recover it. 00:25:05.275 [2024-07-16 01:18:21.134999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.275 [2024-07-16 01:18:21.135028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.275 qpair failed and we were unable to recover it. 00:25:05.275 [2024-07-16 01:18:21.135160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.275 [2024-07-16 01:18:21.135189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.275 qpair failed and we were unable to recover it. 00:25:05.275 [2024-07-16 01:18:21.135318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.275 [2024-07-16 01:18:21.135346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.275 qpair failed and we were unable to recover it. 00:25:05.275 [2024-07-16 01:18:21.135440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.275 [2024-07-16 01:18:21.135468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.275 qpair failed and we were unable to recover it. 00:25:05.275 [2024-07-16 01:18:21.135565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.275 [2024-07-16 01:18:21.135594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.275 qpair failed and we were unable to recover it. 00:25:05.275 [2024-07-16 01:18:21.135726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.275 [2024-07-16 01:18:21.135753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.275 qpair failed and we were unable to recover it. 00:25:05.275 [2024-07-16 01:18:21.135862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.275 [2024-07-16 01:18:21.135903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.275 qpair failed and we were unable to recover it. 00:25:05.275 [2024-07-16 01:18:21.136045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.275 [2024-07-16 01:18:21.136074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.275 qpair failed and we were unable to recover it. 00:25:05.275 [2024-07-16 01:18:21.136170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.275 [2024-07-16 01:18:21.136197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.275 qpair failed and we were unable to recover it. 
00:25:05.275 [2024-07-16 01:18:21.136301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.275 [2024-07-16 01:18:21.136328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.275 qpair failed and we were unable to recover it. 00:25:05.275 [2024-07-16 01:18:21.136419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.276 [2024-07-16 01:18:21.136446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.276 qpair failed and we were unable to recover it. 00:25:05.276 [2024-07-16 01:18:21.136569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.276 [2024-07-16 01:18:21.136610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.276 qpair failed and we were unable to recover it. 00:25:05.276 [2024-07-16 01:18:21.136719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.276 [2024-07-16 01:18:21.136761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.276 qpair failed and we were unable to recover it. 00:25:05.276 [2024-07-16 01:18:21.136862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.276 [2024-07-16 01:18:21.136893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.276 qpair failed and we were unable to recover it. 00:25:05.276 [2024-07-16 01:18:21.137031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.276 [2024-07-16 01:18:21.137059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.276 qpair failed and we were unable to recover it. 00:25:05.276 [2024-07-16 01:18:21.137162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.276 [2024-07-16 01:18:21.137190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.276 qpair failed and we were unable to recover it. 00:25:05.276 [2024-07-16 01:18:21.137307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.276 [2024-07-16 01:18:21.137335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.276 qpair failed and we were unable to recover it. 00:25:05.276 [2024-07-16 01:18:21.137460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.276 [2024-07-16 01:18:21.137489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.276 qpair failed and we were unable to recover it. 00:25:05.276 [2024-07-16 01:18:21.137596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.276 [2024-07-16 01:18:21.137628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.276 qpair failed and we were unable to recover it. 
00:25:05.276 [2024-07-16 01:18:21.137750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.276 [2024-07-16 01:18:21.137778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.276 qpair failed and we were unable to recover it. 00:25:05.276 [2024-07-16 01:18:21.137878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.276 [2024-07-16 01:18:21.137906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.276 qpair failed and we were unable to recover it. 00:25:05.276 [2024-07-16 01:18:21.138013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.276 [2024-07-16 01:18:21.138041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.276 qpair failed and we were unable to recover it. 00:25:05.276 [2024-07-16 01:18:21.138136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.276 [2024-07-16 01:18:21.138163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.276 qpair failed and we were unable to recover it. 00:25:05.276 [2024-07-16 01:18:21.138255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.276 [2024-07-16 01:18:21.138285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.276 qpair failed and we were unable to recover it. 00:25:05.276 [2024-07-16 01:18:21.138385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.276 [2024-07-16 01:18:21.138413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.276 qpair failed and we were unable to recover it. 00:25:05.276 [2024-07-16 01:18:21.138533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.276 [2024-07-16 01:18:21.138560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.276 qpair failed and we were unable to recover it. 00:25:05.276 [2024-07-16 01:18:21.138657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.276 [2024-07-16 01:18:21.138684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.276 qpair failed and we were unable to recover it. 00:25:05.276 [2024-07-16 01:18:21.138791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.276 [2024-07-16 01:18:21.138833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.276 qpair failed and we were unable to recover it. 00:25:05.276 [2024-07-16 01:18:21.138934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.276 [2024-07-16 01:18:21.138974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.276 qpair failed and we were unable to recover it. 
00:25:05.276 [2024-07-16 01:18:21.139122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.276 [2024-07-16 01:18:21.139148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.276 qpair failed and we were unable to recover it. 00:25:05.276 [2024-07-16 01:18:21.139243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.276 [2024-07-16 01:18:21.139269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.276 qpair failed and we were unable to recover it. 00:25:05.276 [2024-07-16 01:18:21.139369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.276 [2024-07-16 01:18:21.139401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.276 qpair failed and we were unable to recover it. 00:25:05.276 [2024-07-16 01:18:21.139495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.276 [2024-07-16 01:18:21.139521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.276 qpair failed and we were unable to recover it. 00:25:05.276 [2024-07-16 01:18:21.139621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.276 [2024-07-16 01:18:21.139647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.276 qpair failed and we were unable to recover it. 00:25:05.276 [2024-07-16 01:18:21.139758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.276 [2024-07-16 01:18:21.139800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.276 qpair failed and we were unable to recover it. 00:25:05.276 [2024-07-16 01:18:21.139931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.276 [2024-07-16 01:18:21.139968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.276 qpair failed and we were unable to recover it. 00:25:05.276 [2024-07-16 01:18:21.140073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.276 [2024-07-16 01:18:21.140100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.276 qpair failed and we were unable to recover it. 00:25:05.276 [2024-07-16 01:18:21.140224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.276 [2024-07-16 01:18:21.140250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.276 qpair failed and we were unable to recover it. 00:25:05.276 [2024-07-16 01:18:21.140343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.276 [2024-07-16 01:18:21.140370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.276 qpair failed and we were unable to recover it. 
00:25:05.276 [2024-07-16 01:18:21.140472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.276 [2024-07-16 01:18:21.140501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.276 qpair failed and we were unable to recover it. 00:25:05.276 [2024-07-16 01:18:21.140621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.276 [2024-07-16 01:18:21.140651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.276 qpair failed and we were unable to recover it. 00:25:05.276 [2024-07-16 01:18:21.140751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.276 [2024-07-16 01:18:21.140777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.276 qpair failed and we were unable to recover it. 00:25:05.276 [2024-07-16 01:18:21.140900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.276 [2024-07-16 01:18:21.140926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.276 qpair failed and we were unable to recover it. 00:25:05.276 [2024-07-16 01:18:21.141036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.276 [2024-07-16 01:18:21.141062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.276 qpair failed and we were unable to recover it. 00:25:05.276 [2024-07-16 01:18:21.141163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.276 [2024-07-16 01:18:21.141193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.276 qpair failed and we were unable to recover it. 00:25:05.276 [2024-07-16 01:18:21.141302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.277 [2024-07-16 01:18:21.141330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.277 qpair failed and we were unable to recover it. 00:25:05.277 [2024-07-16 01:18:21.141428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.277 [2024-07-16 01:18:21.141456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.277 qpair failed and we were unable to recover it. 00:25:05.277 [2024-07-16 01:18:21.141546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.277 [2024-07-16 01:18:21.141574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.277 qpair failed and we were unable to recover it. 00:25:05.277 [2024-07-16 01:18:21.141680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.277 [2024-07-16 01:18:21.141710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.277 qpair failed and we were unable to recover it. 
00:25:05.277 [2024-07-16 01:18:21.141800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.277 [2024-07-16 01:18:21.141827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.277 qpair failed and we were unable to recover it. 00:25:05.277 [2024-07-16 01:18:21.141923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.277 [2024-07-16 01:18:21.141953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.277 qpair failed and we were unable to recover it. 00:25:05.277 [2024-07-16 01:18:21.142083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.277 [2024-07-16 01:18:21.142109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.277 qpair failed and we were unable to recover it. 00:25:05.277 [2024-07-16 01:18:21.142209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.277 [2024-07-16 01:18:21.142234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.277 qpair failed and we were unable to recover it. 00:25:05.277 [2024-07-16 01:18:21.142345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.277 [2024-07-16 01:18:21.142373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.277 qpair failed and we were unable to recover it. 00:25:05.277 [2024-07-16 01:18:21.142466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.277 [2024-07-16 01:18:21.142493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.277 qpair failed and we were unable to recover it. 00:25:05.277 [2024-07-16 01:18:21.142615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.277 [2024-07-16 01:18:21.142641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.277 qpair failed and we were unable to recover it. 00:25:05.277 [2024-07-16 01:18:21.142754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.277 [2024-07-16 01:18:21.142782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.277 qpair failed and we were unable to recover it. 00:25:05.277 [2024-07-16 01:18:21.142885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.277 [2024-07-16 01:18:21.142915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.277 qpair failed and we were unable to recover it. 00:25:05.277 [2024-07-16 01:18:21.143054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.277 [2024-07-16 01:18:21.143089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.277 qpair failed and we were unable to recover it. 
00:25:05.277 [2024-07-16 01:18:21.143184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.277 [2024-07-16 01:18:21.143211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.277 qpair failed and we were unable to recover it. 00:25:05.277 [2024-07-16 01:18:21.143320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.277 [2024-07-16 01:18:21.143348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.277 qpair failed and we were unable to recover it. 00:25:05.277 [2024-07-16 01:18:21.143448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.277 [2024-07-16 01:18:21.143476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.277 qpair failed and we were unable to recover it. 00:25:05.277 [2024-07-16 01:18:21.143575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.277 [2024-07-16 01:18:21.143603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.277 qpair failed and we were unable to recover it. 00:25:05.277 [2024-07-16 01:18:21.143718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.277 [2024-07-16 01:18:21.143747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.277 qpair failed and we were unable to recover it. 00:25:05.277 [2024-07-16 01:18:21.143848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.277 [2024-07-16 01:18:21.143878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.277 qpair failed and we were unable to recover it. 00:25:05.277 [2024-07-16 01:18:21.144006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.277 [2024-07-16 01:18:21.144035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.277 qpair failed and we were unable to recover it. 00:25:05.277 [2024-07-16 01:18:21.144166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.277 [2024-07-16 01:18:21.144193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.277 qpair failed and we were unable to recover it. 00:25:05.277 [2024-07-16 01:18:21.144292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.277 [2024-07-16 01:18:21.144320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.277 qpair failed and we were unable to recover it. 00:25:05.277 [2024-07-16 01:18:21.144415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.277 [2024-07-16 01:18:21.144442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.277 qpair failed and we were unable to recover it. 
00:25:05.277 [2024-07-16 01:18:21.144537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.277 [2024-07-16 01:18:21.144566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.277 qpair failed and we were unable to recover it. 00:25:05.277 [2024-07-16 01:18:21.144697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.277 [2024-07-16 01:18:21.144726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.277 qpair failed and we were unable to recover it. 00:25:05.277 [2024-07-16 01:18:21.144828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.277 [2024-07-16 01:18:21.144859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.277 qpair failed and we were unable to recover it. 00:25:05.277 [2024-07-16 01:18:21.145010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.277 [2024-07-16 01:18:21.145039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.277 qpair failed and we were unable to recover it. 00:25:05.277 [2024-07-16 01:18:21.145165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.277 [2024-07-16 01:18:21.145192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.277 qpair failed and we were unable to recover it. 00:25:05.277 [2024-07-16 01:18:21.145293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.277 [2024-07-16 01:18:21.145320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.277 qpair failed and we were unable to recover it. 00:25:05.277 [2024-07-16 01:18:21.145417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.277 [2024-07-16 01:18:21.145445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.277 qpair failed and we were unable to recover it. 00:25:05.277 [2024-07-16 01:18:21.145548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.277 [2024-07-16 01:18:21.145577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.277 qpair failed and we were unable to recover it. 00:25:05.277 [2024-07-16 01:18:21.145672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.277 [2024-07-16 01:18:21.145699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.277 qpair failed and we were unable to recover it. 00:25:05.277 [2024-07-16 01:18:21.145796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.277 [2024-07-16 01:18:21.145824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.277 qpair failed and we were unable to recover it. 
00:25:05.277 [2024-07-16 01:18:21.145949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.277 [2024-07-16 01:18:21.145983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.277 qpair failed and we were unable to recover it. 00:25:05.277 [2024-07-16 01:18:21.146110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.277 [2024-07-16 01:18:21.146139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.277 qpair failed and we were unable to recover it. 00:25:05.277 [2024-07-16 01:18:21.146248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.277 [2024-07-16 01:18:21.146278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.277 qpair failed and we were unable to recover it. 00:25:05.277 [2024-07-16 01:18:21.146381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.277 [2024-07-16 01:18:21.146407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.277 qpair failed and we were unable to recover it. 00:25:05.277 [2024-07-16 01:18:21.146500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.277 [2024-07-16 01:18:21.146528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.277 qpair failed and we were unable to recover it. 00:25:05.277 [2024-07-16 01:18:21.146620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.277 [2024-07-16 01:18:21.146647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.277 qpair failed and we were unable to recover it. 00:25:05.277 [2024-07-16 01:18:21.146746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.277 [2024-07-16 01:18:21.146774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.277 qpair failed and we were unable to recover it. 00:25:05.277 [2024-07-16 01:18:21.146891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.278 [2024-07-16 01:18:21.146917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.278 qpair failed and we were unable to recover it. 00:25:05.278 [2024-07-16 01:18:21.147038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.278 [2024-07-16 01:18:21.147079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.278 qpair failed and we were unable to recover it. 00:25:05.278 [2024-07-16 01:18:21.147183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.278 [2024-07-16 01:18:21.147210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.278 qpair failed and we were unable to recover it. 
00:25:05.278 [2024-07-16 01:18:21.147306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.278 [2024-07-16 01:18:21.147333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.278 qpair failed and we were unable to recover it. 00:25:05.278 [2024-07-16 01:18:21.147464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.278 [2024-07-16 01:18:21.147491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.278 qpair failed and we were unable to recover it. 00:25:05.278 [2024-07-16 01:18:21.147586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.278 [2024-07-16 01:18:21.147613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.278 qpair failed and we were unable to recover it. 00:25:05.278 [2024-07-16 01:18:21.147710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.278 [2024-07-16 01:18:21.147737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.278 qpair failed and we were unable to recover it. 00:25:05.278 [2024-07-16 01:18:21.147825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.278 [2024-07-16 01:18:21.147852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.278 qpair failed and we were unable to recover it. 00:25:05.278 [2024-07-16 01:18:21.147964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.278 [2024-07-16 01:18:21.148004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.278 qpair failed and we were unable to recover it. 00:25:05.278 [2024-07-16 01:18:21.148100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.278 [2024-07-16 01:18:21.148126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.278 qpair failed and we were unable to recover it. 00:25:05.278 [2024-07-16 01:18:21.148248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.278 [2024-07-16 01:18:21.148284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.278 qpair failed and we were unable to recover it. 00:25:05.278 [2024-07-16 01:18:21.148382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.278 [2024-07-16 01:18:21.148408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.278 qpair failed and we were unable to recover it. 00:25:05.278 [2024-07-16 01:18:21.148537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.278 [2024-07-16 01:18:21.148564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.278 qpair failed and we were unable to recover it. 
00:25:05.278 [2024-07-16 01:18:21.148667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.278 [2024-07-16 01:18:21.148696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.278 qpair failed and we were unable to recover it. 00:25:05.278 [2024-07-16 01:18:21.148798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.278 [2024-07-16 01:18:21.148839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.278 qpair failed and we were unable to recover it. 00:25:05.278 [2024-07-16 01:18:21.148947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.278 [2024-07-16 01:18:21.148987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.278 qpair failed and we were unable to recover it. 00:25:05.278 [2024-07-16 01:18:21.149086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.278 [2024-07-16 01:18:21.149113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.278 qpair failed and we were unable to recover it. 00:25:05.278 [2024-07-16 01:18:21.149209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.278 [2024-07-16 01:18:21.149236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.278 qpair failed and we were unable to recover it. 00:25:05.278 [2024-07-16 01:18:21.149340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.278 [2024-07-16 01:18:21.149368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.278 qpair failed and we were unable to recover it. 00:25:05.278 [2024-07-16 01:18:21.149518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.278 [2024-07-16 01:18:21.149547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.278 qpair failed and we were unable to recover it. 00:25:05.278 [2024-07-16 01:18:21.149655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.278 [2024-07-16 01:18:21.149683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.278 qpair failed and we were unable to recover it. 00:25:05.278 [2024-07-16 01:18:21.149785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.278 [2024-07-16 01:18:21.149812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.278 qpair failed and we were unable to recover it. 00:25:05.278 [2024-07-16 01:18:21.149935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.278 [2024-07-16 01:18:21.149967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.278 qpair failed and we were unable to recover it. 
00:25:05.278 [2024-07-16 01:18:21.150090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.278 [2024-07-16 01:18:21.150115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.278 qpair failed and we were unable to recover it. 00:25:05.278 [2024-07-16 01:18:21.150209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.278 [2024-07-16 01:18:21.150235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.278 qpair failed and we were unable to recover it. 00:25:05.278 [2024-07-16 01:18:21.150325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.278 [2024-07-16 01:18:21.150351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.278 qpair failed and we were unable to recover it. 00:25:05.278 [2024-07-16 01:18:21.150449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.278 [2024-07-16 01:18:21.150477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.278 qpair failed and we were unable to recover it. 00:25:05.278 [2024-07-16 01:18:21.150568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.278 [2024-07-16 01:18:21.150594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.278 qpair failed and we were unable to recover it. 00:25:05.278 [2024-07-16 01:18:21.150686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.278 [2024-07-16 01:18:21.150713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.278 qpair failed and we were unable to recover it. 00:25:05.278 [2024-07-16 01:18:21.150820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.278 [2024-07-16 01:18:21.150848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.278 qpair failed and we were unable to recover it. 00:25:05.278 [2024-07-16 01:18:21.150942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.278 [2024-07-16 01:18:21.150974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.278 qpair failed and we were unable to recover it. 00:25:05.278 [2024-07-16 01:18:21.151081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.278 [2024-07-16 01:18:21.151107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.278 qpair failed and we were unable to recover it. 00:25:05.278 [2024-07-16 01:18:21.151200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.278 [2024-07-16 01:18:21.151226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.278 qpair failed and we were unable to recover it. 
00:25:05.278 [2024-07-16 01:18:21.151377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.278 [2024-07-16 01:18:21.151404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.278 qpair failed and we were unable to recover it. 00:25:05.278 [2024-07-16 01:18:21.151500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.278 [2024-07-16 01:18:21.151527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.278 qpair failed and we were unable to recover it. 00:25:05.278 [2024-07-16 01:18:21.151628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.278 [2024-07-16 01:18:21.151657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.278 qpair failed and we were unable to recover it. 00:25:05.278 [2024-07-16 01:18:21.151748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.278 [2024-07-16 01:18:21.151775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.278 qpair failed and we were unable to recover it. 00:25:05.278 [2024-07-16 01:18:21.151866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.278 [2024-07-16 01:18:21.151893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.278 qpair failed and we were unable to recover it. 00:25:05.278 [2024-07-16 01:18:21.152012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.278 [2024-07-16 01:18:21.152048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.278 qpair failed and we were unable to recover it. 00:25:05.278 [2024-07-16 01:18:21.152148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.278 [2024-07-16 01:18:21.152179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.278 qpair failed and we were unable to recover it. 00:25:05.278 [2024-07-16 01:18:21.152278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.278 [2024-07-16 01:18:21.152305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.278 qpair failed and we were unable to recover it. 00:25:05.278 [2024-07-16 01:18:21.152412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.279 [2024-07-16 01:18:21.152439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.279 qpair failed and we were unable to recover it. 00:25:05.279 [2024-07-16 01:18:21.152529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.279 [2024-07-16 01:18:21.152556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.279 qpair failed and we were unable to recover it. 
00:25:05.279 [2024-07-16 01:18:21.152666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.279 [2024-07-16 01:18:21.152707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.279 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() failed, errno = 111; sock connection error with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously between 01:18:21.152 and 01:18:21.181 for tqpairs 0x14b73f0, 0x7f7a58000b90, 0x7f7a60000b90, and 0x7f7a68000b90 — roughly 200 near-identical entries omitted ...]
00:25:05.573 [2024-07-16 01:18:21.180938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.573 [2024-07-16 01:18:21.180976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.573 qpair failed and we were unable to recover it. 00:25:05.573 [2024-07-16 01:18:21.181091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.573 [2024-07-16 01:18:21.181120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.573 qpair failed and we were unable to recover it. 00:25:05.573 [2024-07-16 01:18:21.181217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.573 [2024-07-16 01:18:21.181245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.573 qpair failed and we were unable to recover it. 00:25:05.573 [2024-07-16 01:18:21.181369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.573 [2024-07-16 01:18:21.181397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.573 qpair failed and we were unable to recover it. 00:25:05.573 [2024-07-16 01:18:21.181514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.573 [2024-07-16 01:18:21.181542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.573 qpair failed and we were unable to recover it. 00:25:05.573 [2024-07-16 01:18:21.181651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.573 [2024-07-16 01:18:21.181692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.573 qpair failed and we were unable to recover it. 00:25:05.573 [2024-07-16 01:18:21.181816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.573 [2024-07-16 01:18:21.181845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.573 qpair failed and we were unable to recover it. 00:25:05.573 [2024-07-16 01:18:21.181961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.573 [2024-07-16 01:18:21.181991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.573 qpair failed and we were unable to recover it. 00:25:05.573 [2024-07-16 01:18:21.182092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.574 [2024-07-16 01:18:21.182120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.574 qpair failed and we were unable to recover it. 00:25:05.574 [2024-07-16 01:18:21.182211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.574 [2024-07-16 01:18:21.182239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.574 qpair failed and we were unable to recover it. 
00:25:05.574 [2024-07-16 01:18:21.182335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.574 [2024-07-16 01:18:21.182362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.574 qpair failed and we were unable to recover it. 00:25:05.574 [2024-07-16 01:18:21.182457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.574 [2024-07-16 01:18:21.182486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.574 qpair failed and we were unable to recover it. 00:25:05.574 [2024-07-16 01:18:21.182614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.574 [2024-07-16 01:18:21.182645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.574 qpair failed and we were unable to recover it. 00:25:05.574 [2024-07-16 01:18:21.182745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.574 [2024-07-16 01:18:21.182773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.574 qpair failed and we were unable to recover it. 00:25:05.574 [2024-07-16 01:18:21.182902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.574 [2024-07-16 01:18:21.182934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.574 qpair failed and we were unable to recover it. 00:25:05.574 [2024-07-16 01:18:21.183047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.574 [2024-07-16 01:18:21.183074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.574 qpair failed and we were unable to recover it. 00:25:05.574 [2024-07-16 01:18:21.183175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.574 [2024-07-16 01:18:21.183202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.574 qpair failed and we were unable to recover it. 00:25:05.574 [2024-07-16 01:18:21.183300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.574 [2024-07-16 01:18:21.183326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.574 qpair failed and we were unable to recover it. 00:25:05.574 [2024-07-16 01:18:21.183452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.574 [2024-07-16 01:18:21.183479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.574 qpair failed and we were unable to recover it. 00:25:05.574 [2024-07-16 01:18:21.183584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.574 [2024-07-16 01:18:21.183612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.574 qpair failed and we were unable to recover it. 
00:25:05.574 [2024-07-16 01:18:21.183716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.574 [2024-07-16 01:18:21.183746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.574 qpair failed and we were unable to recover it. 00:25:05.574 [2024-07-16 01:18:21.183870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.574 [2024-07-16 01:18:21.183897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.574 qpair failed and we were unable to recover it. 00:25:05.574 [2024-07-16 01:18:21.184019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.574 [2024-07-16 01:18:21.184047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.574 qpair failed and we were unable to recover it. 00:25:05.574 [2024-07-16 01:18:21.184144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.574 [2024-07-16 01:18:21.184172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.574 qpair failed and we were unable to recover it. 00:25:05.574 [2024-07-16 01:18:21.184295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.574 [2024-07-16 01:18:21.184322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.574 qpair failed and we were unable to recover it. 00:25:05.574 [2024-07-16 01:18:21.184415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.574 [2024-07-16 01:18:21.184443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.574 qpair failed and we were unable to recover it. 00:25:05.574 [2024-07-16 01:18:21.184567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.574 [2024-07-16 01:18:21.184595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.574 qpair failed and we were unable to recover it. 00:25:05.574 [2024-07-16 01:18:21.184690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.574 [2024-07-16 01:18:21.184717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.574 qpair failed and we were unable to recover it. 00:25:05.574 [2024-07-16 01:18:21.184850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.574 [2024-07-16 01:18:21.184877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.574 qpair failed and we were unable to recover it. 00:25:05.574 [2024-07-16 01:18:21.184977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.574 [2024-07-16 01:18:21.185005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.574 qpair failed and we were unable to recover it. 
00:25:05.574 [2024-07-16 01:18:21.185096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.574 [2024-07-16 01:18:21.185123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.574 qpair failed and we were unable to recover it. 00:25:05.574 [2024-07-16 01:18:21.185225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.574 [2024-07-16 01:18:21.185252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.574 qpair failed and we were unable to recover it. 00:25:05.574 [2024-07-16 01:18:21.185348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.574 [2024-07-16 01:18:21.185376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.574 qpair failed and we were unable to recover it. 00:25:05.574 [2024-07-16 01:18:21.185467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.574 [2024-07-16 01:18:21.185494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.574 qpair failed and we were unable to recover it. 00:25:05.574 [2024-07-16 01:18:21.185618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.574 [2024-07-16 01:18:21.185646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.574 qpair failed and we were unable to recover it. 00:25:05.574 [2024-07-16 01:18:21.185744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.574 [2024-07-16 01:18:21.185771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.574 qpair failed and we were unable to recover it. 00:25:05.574 [2024-07-16 01:18:21.185920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.574 [2024-07-16 01:18:21.185971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.574 qpair failed and we were unable to recover it. 00:25:05.574 [2024-07-16 01:18:21.186080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.574 [2024-07-16 01:18:21.186108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.574 qpair failed and we were unable to recover it. 00:25:05.574 [2024-07-16 01:18:21.186230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.574 [2024-07-16 01:18:21.186257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.574 qpair failed and we were unable to recover it. 00:25:05.574 [2024-07-16 01:18:21.186353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.574 [2024-07-16 01:18:21.186381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.574 qpair failed and we were unable to recover it. 
00:25:05.574 [2024-07-16 01:18:21.186484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.574 [2024-07-16 01:18:21.186511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.574 qpair failed and we were unable to recover it. 00:25:05.574 [2024-07-16 01:18:21.186637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.574 [2024-07-16 01:18:21.186671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.574 qpair failed and we were unable to recover it. 00:25:05.574 [2024-07-16 01:18:21.186791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.574 [2024-07-16 01:18:21.186819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.574 qpair failed and we were unable to recover it. 00:25:05.574 [2024-07-16 01:18:21.186915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.574 [2024-07-16 01:18:21.186943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.574 qpair failed and we were unable to recover it. 00:25:05.574 [2024-07-16 01:18:21.187058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.574 [2024-07-16 01:18:21.187099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.574 qpair failed and we were unable to recover it. 00:25:05.574 [2024-07-16 01:18:21.187203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.574 [2024-07-16 01:18:21.187231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.574 qpair failed and we were unable to recover it. 00:25:05.574 [2024-07-16 01:18:21.187359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.574 [2024-07-16 01:18:21.187386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.574 qpair failed and we were unable to recover it. 00:25:05.574 [2024-07-16 01:18:21.187490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.574 [2024-07-16 01:18:21.187517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.574 qpair failed and we were unable to recover it. 00:25:05.574 [2024-07-16 01:18:21.187619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.574 [2024-07-16 01:18:21.187648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.574 qpair failed and we were unable to recover it. 00:25:05.574 [2024-07-16 01:18:21.187759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.574 [2024-07-16 01:18:21.187801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.574 qpair failed and we were unable to recover it. 
00:25:05.574 [2024-07-16 01:18:21.187940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.574 [2024-07-16 01:18:21.187987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.574 qpair failed and we were unable to recover it. 00:25:05.574 [2024-07-16 01:18:21.188088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.574 [2024-07-16 01:18:21.188117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.574 qpair failed and we were unable to recover it. 00:25:05.574 [2024-07-16 01:18:21.188225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.574 [2024-07-16 01:18:21.188253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.574 qpair failed and we were unable to recover it. 00:25:05.574 [2024-07-16 01:18:21.188377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.574 [2024-07-16 01:18:21.188405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.574 qpair failed and we were unable to recover it. 00:25:05.574 [2024-07-16 01:18:21.188501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.574 [2024-07-16 01:18:21.188529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.574 qpair failed and we were unable to recover it. 00:25:05.574 [2024-07-16 01:18:21.188632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.574 [2024-07-16 01:18:21.188658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.574 qpair failed and we were unable to recover it. 00:25:05.574 [2024-07-16 01:18:21.188746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.574 [2024-07-16 01:18:21.188773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.574 qpair failed and we were unable to recover it. 00:25:05.574 [2024-07-16 01:18:21.188895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.574 [2024-07-16 01:18:21.188924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.574 qpair failed and we were unable to recover it. 00:25:05.574 [2024-07-16 01:18:21.189033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.574 [2024-07-16 01:18:21.189061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.574 qpair failed and we were unable to recover it. 00:25:05.574 [2024-07-16 01:18:21.189184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.574 [2024-07-16 01:18:21.189211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.574 qpair failed and we were unable to recover it. 
00:25:05.574 [2024-07-16 01:18:21.189308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.575 [2024-07-16 01:18:21.189335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.575 qpair failed and we were unable to recover it. 00:25:05.575 [2024-07-16 01:18:21.189436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.575 [2024-07-16 01:18:21.189463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.575 qpair failed and we were unable to recover it. 00:25:05.575 [2024-07-16 01:18:21.189563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.575 [2024-07-16 01:18:21.189590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.575 qpair failed and we were unable to recover it. 00:25:05.575 [2024-07-16 01:18:21.189716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.575 [2024-07-16 01:18:21.189745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.575 qpair failed and we were unable to recover it. 00:25:05.575 [2024-07-16 01:18:21.189844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.575 [2024-07-16 01:18:21.189871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.575 qpair failed and we were unable to recover it. 00:25:05.575 [2024-07-16 01:18:21.189970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.575 [2024-07-16 01:18:21.189998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.575 qpair failed and we were unable to recover it. 00:25:05.575 [2024-07-16 01:18:21.190086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.575 [2024-07-16 01:18:21.190113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.575 qpair failed and we were unable to recover it. 00:25:05.575 [2024-07-16 01:18:21.190210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.575 [2024-07-16 01:18:21.190238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.575 qpair failed and we were unable to recover it. 00:25:05.575 [2024-07-16 01:18:21.190338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.575 [2024-07-16 01:18:21.190370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.575 qpair failed and we were unable to recover it. 00:25:05.575 [2024-07-16 01:18:21.190464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.575 [2024-07-16 01:18:21.190491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.575 qpair failed and we were unable to recover it. 
00:25:05.575 [2024-07-16 01:18:21.190621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.575 [2024-07-16 01:18:21.190662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.575 qpair failed and we were unable to recover it. 00:25:05.575 [2024-07-16 01:18:21.190768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.575 [2024-07-16 01:18:21.190797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.575 qpair failed and we were unable to recover it. 00:25:05.575 [2024-07-16 01:18:21.190900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.575 [2024-07-16 01:18:21.190929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.575 qpair failed and we were unable to recover it. 00:25:05.575 [2024-07-16 01:18:21.191039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.575 [2024-07-16 01:18:21.191067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.575 qpair failed and we were unable to recover it. 00:25:05.575 [2024-07-16 01:18:21.191171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.575 [2024-07-16 01:18:21.191198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.575 qpair failed and we were unable to recover it. 00:25:05.575 [2024-07-16 01:18:21.191291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.575 [2024-07-16 01:18:21.191317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.575 qpair failed and we were unable to recover it. 00:25:05.575 [2024-07-16 01:18:21.191422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.575 [2024-07-16 01:18:21.191450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.575 qpair failed and we were unable to recover it. 00:25:05.575 [2024-07-16 01:18:21.191558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.575 [2024-07-16 01:18:21.191585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.575 qpair failed and we were unable to recover it. 00:25:05.575 [2024-07-16 01:18:21.191688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.575 [2024-07-16 01:18:21.191715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.575 qpair failed and we were unable to recover it. 00:25:05.575 [2024-07-16 01:18:21.191809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.575 [2024-07-16 01:18:21.191836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.575 qpair failed and we were unable to recover it. 
00:25:05.575 [2024-07-16 01:18:21.191928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.575 [2024-07-16 01:18:21.191962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.575 qpair failed and we were unable to recover it. 00:25:05.575 [2024-07-16 01:18:21.192057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.575 [2024-07-16 01:18:21.192083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.575 qpair failed and we were unable to recover it. 00:25:05.575 [2024-07-16 01:18:21.192191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.575 [2024-07-16 01:18:21.192217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.575 qpair failed and we were unable to recover it. 00:25:05.575 [2024-07-16 01:18:21.192338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.575 [2024-07-16 01:18:21.192365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.575 qpair failed and we were unable to recover it. 00:25:05.575 [2024-07-16 01:18:21.192463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.575 [2024-07-16 01:18:21.192490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.575 qpair failed and we were unable to recover it. 00:25:05.575 [2024-07-16 01:18:21.192624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.575 [2024-07-16 01:18:21.192650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.575 qpair failed and we were unable to recover it. 00:25:05.575 [2024-07-16 01:18:21.192750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.575 [2024-07-16 01:18:21.192776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.575 qpair failed and we were unable to recover it. 00:25:05.575 [2024-07-16 01:18:21.192869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.575 [2024-07-16 01:18:21.192896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.575 qpair failed and we were unable to recover it. 00:25:05.575 [2024-07-16 01:18:21.193033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.575 [2024-07-16 01:18:21.193075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.575 qpair failed and we were unable to recover it. 00:25:05.575 [2024-07-16 01:18:21.193196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.575 [2024-07-16 01:18:21.193238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.575 qpair failed and we were unable to recover it. 
00:25:05.575 [2024-07-16 01:18:21.193347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.575 [2024-07-16 01:18:21.193377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.575 qpair failed and we were unable to recover it. 00:25:05.575 [2024-07-16 01:18:21.193487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.575 [2024-07-16 01:18:21.193515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.575 qpair failed and we were unable to recover it. 00:25:05.575 [2024-07-16 01:18:21.193634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.575 [2024-07-16 01:18:21.193662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.575 qpair failed and we were unable to recover it. 00:25:05.575 [2024-07-16 01:18:21.193772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.575 [2024-07-16 01:18:21.193800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.575 qpair failed and we were unable to recover it. 00:25:05.575 [2024-07-16 01:18:21.193899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.575 [2024-07-16 01:18:21.193928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.575 qpair failed and we were unable to recover it. 00:25:05.575 [2024-07-16 01:18:21.194034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.575 [2024-07-16 01:18:21.194066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.575 qpair failed and we were unable to recover it. 00:25:05.575 [2024-07-16 01:18:21.194167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.575 [2024-07-16 01:18:21.194194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.575 qpair failed and we were unable to recover it. 00:25:05.575 [2024-07-16 01:18:21.194318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.575 [2024-07-16 01:18:21.194345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.575 qpair failed and we were unable to recover it. 00:25:05.575 [2024-07-16 01:18:21.194436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.575 [2024-07-16 01:18:21.194463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.575 qpair failed and we were unable to recover it. 00:25:05.575 [2024-07-16 01:18:21.194579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.575 [2024-07-16 01:18:21.194606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.575 qpair failed and we were unable to recover it. 
00:25:05.575 [2024-07-16 01:18:21.194709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.575 [2024-07-16 01:18:21.194740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.575 qpair failed and we were unable to recover it. 00:25:05.575 [2024-07-16 01:18:21.194871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.575 [2024-07-16 01:18:21.194900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.575 qpair failed and we were unable to recover it. 00:25:05.575 [2024-07-16 01:18:21.195024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.575 [2024-07-16 01:18:21.195052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.575 qpair failed and we were unable to recover it. 00:25:05.575 [2024-07-16 01:18:21.195145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.575 [2024-07-16 01:18:21.195173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.575 qpair failed and we were unable to recover it. 00:25:05.575 [2024-07-16 01:18:21.195274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.575 [2024-07-16 01:18:21.195302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.575 qpair failed and we were unable to recover it. 00:25:05.575 [2024-07-16 01:18:21.195438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.575 [2024-07-16 01:18:21.195466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.575 qpair failed and we were unable to recover it. 00:25:05.575 [2024-07-16 01:18:21.195555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.575 [2024-07-16 01:18:21.195583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.575 qpair failed and we were unable to recover it. 00:25:05.575 [2024-07-16 01:18:21.195702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.575 [2024-07-16 01:18:21.195744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.575 qpair failed and we were unable to recover it. 00:25:05.575 [2024-07-16 01:18:21.195882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.575 [2024-07-16 01:18:21.195912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.575 qpair failed and we were unable to recover it. 00:25:05.575 [2024-07-16 01:18:21.196017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.575 [2024-07-16 01:18:21.196046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.575 qpair failed and we were unable to recover it. 
00:25:05.575 [2024-07-16 01:18:21.196151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.575 [2024-07-16 01:18:21.196178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.575 qpair failed and we were unable to recover it. 00:25:05.575 [2024-07-16 01:18:21.196300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.575 [2024-07-16 01:18:21.196327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.575 qpair failed and we were unable to recover it. 00:25:05.575 [2024-07-16 01:18:21.196422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.575 [2024-07-16 01:18:21.196449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.575 qpair failed and we were unable to recover it. 00:25:05.575 [2024-07-16 01:18:21.196578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.576 [2024-07-16 01:18:21.196608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.576 qpair failed and we were unable to recover it. 00:25:05.576 [2024-07-16 01:18:21.196704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.576 [2024-07-16 01:18:21.196732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.576 qpair failed and we were unable to recover it. 00:25:05.576 [2024-07-16 01:18:21.196829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.576 [2024-07-16 01:18:21.196857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.576 qpair failed and we were unable to recover it. 00:25:05.576 [2024-07-16 01:18:21.196953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.576 [2024-07-16 01:18:21.196989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.576 qpair failed and we were unable to recover it. 00:25:05.576 [2024-07-16 01:18:21.197087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.576 [2024-07-16 01:18:21.197115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.576 qpair failed and we were unable to recover it. 00:25:05.576 [2024-07-16 01:18:21.197217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.576 [2024-07-16 01:18:21.197244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.576 qpair failed and we were unable to recover it. 00:25:05.576 [2024-07-16 01:18:21.197370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.576 [2024-07-16 01:18:21.197398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.576 qpair failed and we were unable to recover it. 
00:25:05.576 [2024-07-16 01:18:21.197517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.576 [2024-07-16 01:18:21.197544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.576 qpair failed and we were unable to recover it. 00:25:05.576 [2024-07-16 01:18:21.197676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.576 [2024-07-16 01:18:21.197717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.576 qpair failed and we were unable to recover it. 00:25:05.576 [2024-07-16 01:18:21.197827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.576 [2024-07-16 01:18:21.197857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.576 qpair failed and we were unable to recover it. 00:25:05.576 [2024-07-16 01:18:21.197971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.576 [2024-07-16 01:18:21.198012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.576 qpair failed and we were unable to recover it. 00:25:05.576 [2024-07-16 01:18:21.198114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.576 [2024-07-16 01:18:21.198144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.576 qpair failed and we were unable to recover it. 00:25:05.576 [2024-07-16 01:18:21.198253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.576 [2024-07-16 01:18:21.198281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.576 qpair failed and we were unable to recover it. 00:25:05.576 [2024-07-16 01:18:21.198381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.576 [2024-07-16 01:18:21.198408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.576 qpair failed and we were unable to recover it. 00:25:05.576 [2024-07-16 01:18:21.198503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.576 [2024-07-16 01:18:21.198530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.576 qpair failed and we were unable to recover it. 00:25:05.576 [2024-07-16 01:18:21.198627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.576 [2024-07-16 01:18:21.198654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.576 qpair failed and we were unable to recover it. 00:25:05.576 [2024-07-16 01:18:21.198763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.576 [2024-07-16 01:18:21.198791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.576 qpair failed and we were unable to recover it. 
00:25:05.576 [2024-07-16 01:18:21.198886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.576 [2024-07-16 01:18:21.198916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.576 qpair failed and we were unable to recover it.
00:25:05.576 [2024-07-16 01:18:21.199021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.576 [2024-07-16 01:18:21.199049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.576 qpair failed and we were unable to recover it.
00:25:05.576 [2024-07-16 01:18:21.199146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.576 [2024-07-16 01:18:21.199173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.576 qpair failed and we were unable to recover it.
00:25:05.576 [2024-07-16 01:18:21.199277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.576 [2024-07-16 01:18:21.199306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.576 qpair failed and we were unable to recover it.
00:25:05.576 [2024-07-16 01:18:21.199429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.576 [2024-07-16 01:18:21.199457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.576 qpair failed and we were unable to recover it.
00:25:05.576 [2024-07-16 01:18:21.199560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.576 [2024-07-16 01:18:21.199606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.576 qpair failed and we were unable to recover it.
00:25:05.576 [2024-07-16 01:18:21.199740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.576 [2024-07-16 01:18:21.199768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.576 qpair failed and we were unable to recover it.
00:25:05.576 [2024-07-16 01:18:21.199865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.576 [2024-07-16 01:18:21.199897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.576 qpair failed and we were unable to recover it.
00:25:05.576 [2024-07-16 01:18:21.200005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.576 [2024-07-16 01:18:21.200032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.576 qpair failed and we were unable to recover it.
00:25:05.576 [2024-07-16 01:18:21.200132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.576 [2024-07-16 01:18:21.200160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.576 qpair failed and we were unable to recover it.
00:25:05.576 [2024-07-16 01:18:21.200249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.576 [2024-07-16 01:18:21.200277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.576 qpair failed and we were unable to recover it.
00:25:05.576 [2024-07-16 01:18:21.200401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.576 [2024-07-16 01:18:21.200428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.576 qpair failed and we were unable to recover it.
00:25:05.576 [2024-07-16 01:18:21.200516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.576 [2024-07-16 01:18:21.200543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.576 qpair failed and we were unable to recover it.
00:25:05.576 [2024-07-16 01:18:21.200638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.576 [2024-07-16 01:18:21.200667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.576 qpair failed and we were unable to recover it.
00:25:05.576 [2024-07-16 01:18:21.200797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.576 [2024-07-16 01:18:21.200826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.576 qpair failed and we were unable to recover it.
00:25:05.576 [2024-07-16 01:18:21.200922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.576 [2024-07-16 01:18:21.200952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.576 qpair failed and we were unable to recover it.
00:25:05.576 [2024-07-16 01:18:21.201092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.576 [2024-07-16 01:18:21.201120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.576 qpair failed and we were unable to recover it.
00:25:05.576 [2024-07-16 01:18:21.201220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.576 [2024-07-16 01:18:21.201247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.576 qpair failed and we were unable to recover it.
00:25:05.576 [2024-07-16 01:18:21.201346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.576 [2024-07-16 01:18:21.201374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.576 qpair failed and we were unable to recover it.
00:25:05.576 [2024-07-16 01:18:21.201479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.576 [2024-07-16 01:18:21.201507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.576 qpair failed and we were unable to recover it.
00:25:05.576 [2024-07-16 01:18:21.201608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.576 [2024-07-16 01:18:21.201635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.576 qpair failed and we were unable to recover it.
00:25:05.576 [2024-07-16 01:18:21.201733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.576 [2024-07-16 01:18:21.201763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.576 qpair failed and we were unable to recover it.
00:25:05.576 [2024-07-16 01:18:21.201899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.576 [2024-07-16 01:18:21.201927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.576 qpair failed and we were unable to recover it.
00:25:05.576 [2024-07-16 01:18:21.202027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.576 [2024-07-16 01:18:21.202055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.576 qpair failed and we were unable to recover it.
00:25:05.576 [2024-07-16 01:18:21.202156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.576 [2024-07-16 01:18:21.202183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.576 qpair failed and we were unable to recover it.
00:25:05.576 [2024-07-16 01:18:21.202310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.576 [2024-07-16 01:18:21.202340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.576 qpair failed and we were unable to recover it.
00:25:05.576 [2024-07-16 01:18:21.202437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.576 [2024-07-16 01:18:21.202465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.576 qpair failed and we were unable to recover it.
00:25:05.576 [2024-07-16 01:18:21.202565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.576 [2024-07-16 01:18:21.202593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.576 qpair failed and we were unable to recover it.
00:25:05.576 [2024-07-16 01:18:21.202720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.576 [2024-07-16 01:18:21.202750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.576 qpair failed and we were unable to recover it.
00:25:05.576 [2024-07-16 01:18:21.202857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.576 [2024-07-16 01:18:21.202899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.576 qpair failed and we were unable to recover it.
00:25:05.576 [2024-07-16 01:18:21.203008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.576 [2024-07-16 01:18:21.203038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.576 qpair failed and we were unable to recover it.
00:25:05.576 [2024-07-16 01:18:21.203133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.576 [2024-07-16 01:18:21.203160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.576 qpair failed and we were unable to recover it.
00:25:05.576 [2024-07-16 01:18:21.203262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.576 [2024-07-16 01:18:21.203291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.576 qpair failed and we were unable to recover it.
00:25:05.576 [2024-07-16 01:18:21.203393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.576 [2024-07-16 01:18:21.203421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.576 qpair failed and we were unable to recover it.
00:25:05.576 [2024-07-16 01:18:21.203520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.577 [2024-07-16 01:18:21.203548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.577 qpair failed and we were unable to recover it.
00:25:05.577 [2024-07-16 01:18:21.203639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.577 [2024-07-16 01:18:21.203666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.577 qpair failed and we were unable to recover it.
00:25:05.577 [2024-07-16 01:18:21.203777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.577 [2024-07-16 01:18:21.203818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.577 qpair failed and we were unable to recover it.
00:25:05.577 [2024-07-16 01:18:21.203918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.577 [2024-07-16 01:18:21.203947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.577 qpair failed and we were unable to recover it.
00:25:05.577 [2024-07-16 01:18:21.204053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.577 [2024-07-16 01:18:21.204083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.577 qpair failed and we were unable to recover it.
00:25:05.577 [2024-07-16 01:18:21.204185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.577 [2024-07-16 01:18:21.204213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.577 qpair failed and we were unable to recover it.
00:25:05.577 [2024-07-16 01:18:21.204313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.577 [2024-07-16 01:18:21.204341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.577 qpair failed and we were unable to recover it.
00:25:05.577 [2024-07-16 01:18:21.204436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.577 [2024-07-16 01:18:21.204463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.577 qpair failed and we were unable to recover it.
00:25:05.577 [2024-07-16 01:18:21.204565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.577 [2024-07-16 01:18:21.204593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.577 qpair failed and we were unable to recover it.
00:25:05.577 [2024-07-16 01:18:21.204736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.577 [2024-07-16 01:18:21.204778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.577 qpair failed and we were unable to recover it.
00:25:05.577 [2024-07-16 01:18:21.204880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.577 [2024-07-16 01:18:21.204909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.577 qpair failed and we were unable to recover it.
00:25:05.577 [2024-07-16 01:18:21.205078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.577 [2024-07-16 01:18:21.205107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.577 qpair failed and we were unable to recover it.
00:25:05.577 [2024-07-16 01:18:21.205207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.577 [2024-07-16 01:18:21.205235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.577 qpair failed and we were unable to recover it.
00:25:05.577 [2024-07-16 01:18:21.205357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.577 [2024-07-16 01:18:21.205384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.577 qpair failed and we were unable to recover it.
00:25:05.577 [2024-07-16 01:18:21.205494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.577 [2024-07-16 01:18:21.205521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.577 qpair failed and we were unable to recover it.
00:25:05.577 [2024-07-16 01:18:21.205615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.577 [2024-07-16 01:18:21.205642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.577 qpair failed and we were unable to recover it.
00:25:05.577 [2024-07-16 01:18:21.205735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.577 [2024-07-16 01:18:21.205762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.577 qpair failed and we were unable to recover it.
00:25:05.577 [2024-07-16 01:18:21.205861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.577 [2024-07-16 01:18:21.205890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.577 qpair failed and we were unable to recover it.
00:25:05.577 [2024-07-16 01:18:21.205996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.577 [2024-07-16 01:18:21.206027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.577 qpair failed and we were unable to recover it.
00:25:05.577 [2024-07-16 01:18:21.206125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.577 [2024-07-16 01:18:21.206152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.577 qpair failed and we were unable to recover it.
00:25:05.577 [2024-07-16 01:18:21.206256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.577 [2024-07-16 01:18:21.206283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.577 qpair failed and we were unable to recover it.
00:25:05.577 [2024-07-16 01:18:21.206375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.577 [2024-07-16 01:18:21.206403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.577 qpair failed and we were unable to recover it.
00:25:05.577 [2024-07-16 01:18:21.206497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.577 [2024-07-16 01:18:21.206526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.577 qpair failed and we were unable to recover it.
00:25:05.577 [2024-07-16 01:18:21.206639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.577 [2024-07-16 01:18:21.206680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.577 qpair failed and we were unable to recover it.
00:25:05.577 [2024-07-16 01:18:21.206786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.577 [2024-07-16 01:18:21.206815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.577 qpair failed and we were unable to recover it.
00:25:05.577 [2024-07-16 01:18:21.206965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.577 [2024-07-16 01:18:21.207006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.577 qpair failed and we were unable to recover it.
00:25:05.577 [2024-07-16 01:18:21.207146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.577 [2024-07-16 01:18:21.207175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.577 qpair failed and we were unable to recover it.
00:25:05.577 [2024-07-16 01:18:21.207267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.577 [2024-07-16 01:18:21.207295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.577 qpair failed and we were unable to recover it.
00:25:05.577 [2024-07-16 01:18:21.207396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.577 [2024-07-16 01:18:21.207424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.577 qpair failed and we were unable to recover it.
00:25:05.577 [2024-07-16 01:18:21.207520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.577 [2024-07-16 01:18:21.207547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.577 qpair failed and we were unable to recover it.
00:25:05.577 [2024-07-16 01:18:21.207677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.577 [2024-07-16 01:18:21.207709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.577 qpair failed and we were unable to recover it.
00:25:05.577 [2024-07-16 01:18:21.207811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.577 [2024-07-16 01:18:21.207839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.577 qpair failed and we were unable to recover it.
00:25:05.577 [2024-07-16 01:18:21.207948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.577 [2024-07-16 01:18:21.207996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.577 qpair failed and we were unable to recover it.
00:25:05.577 [2024-07-16 01:18:21.208114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.577 [2024-07-16 01:18:21.208142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.577 qpair failed and we were unable to recover it.
00:25:05.577 [2024-07-16 01:18:21.208239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.577 [2024-07-16 01:18:21.208267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.577 qpair failed and we were unable to recover it.
00:25:05.577 [2024-07-16 01:18:21.208370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.577 [2024-07-16 01:18:21.208397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.577 qpair failed and we were unable to recover it.
00:25:05.577 [2024-07-16 01:18:21.208492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.577 [2024-07-16 01:18:21.208520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.577 qpair failed and we were unable to recover it.
00:25:05.577 [2024-07-16 01:18:21.208644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.577 [2024-07-16 01:18:21.208671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.577 qpair failed and we were unable to recover it.
00:25:05.577 [2024-07-16 01:18:21.208773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.577 [2024-07-16 01:18:21.208806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.577 qpair failed and we were unable to recover it.
00:25:05.577 [2024-07-16 01:18:21.208935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.577 [2024-07-16 01:18:21.208975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.577 qpair failed and we were unable to recover it.
00:25:05.577 [2024-07-16 01:18:21.209075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.577 [2024-07-16 01:18:21.209105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.577 qpair failed and we were unable to recover it.
00:25:05.577 [2024-07-16 01:18:21.209217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.577 [2024-07-16 01:18:21.209258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.577 qpair failed and we were unable to recover it.
00:25:05.577 [2024-07-16 01:18:21.209361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.577 [2024-07-16 01:18:21.209390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.577 qpair failed and we were unable to recover it.
00:25:05.577 [2024-07-16 01:18:21.209485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.577 [2024-07-16 01:18:21.209512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.577 qpair failed and we were unable to recover it.
00:25:05.577 [2024-07-16 01:18:21.209636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.577 [2024-07-16 01:18:21.209664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.577 qpair failed and we were unable to recover it.
00:25:05.577 [2024-07-16 01:18:21.209757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.577 [2024-07-16 01:18:21.209784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.577 qpair failed and we were unable to recover it.
00:25:05.577 [2024-07-16 01:18:21.209884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.577 [2024-07-16 01:18:21.209911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.578 qpair failed and we were unable to recover it.
00:25:05.578 [2024-07-16 01:18:21.210022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.578 [2024-07-16 01:18:21.210051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.578 qpair failed and we were unable to recover it.
00:25:05.578 [2024-07-16 01:18:21.210148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.578 [2024-07-16 01:18:21.210173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.578 qpair failed and we were unable to recover it.
00:25:05.578 [2024-07-16 01:18:21.210275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.578 [2024-07-16 01:18:21.210302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.578 qpair failed and we were unable to recover it.
00:25:05.578 [2024-07-16 01:18:21.210450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.578 [2024-07-16 01:18:21.210476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.578 qpair failed and we were unable to recover it.
00:25:05.578 [2024-07-16 01:18:21.210568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.578 [2024-07-16 01:18:21.210595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.578 qpair failed and we were unable to recover it.
00:25:05.578 [2024-07-16 01:18:21.210703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.578 [2024-07-16 01:18:21.210730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.578 qpair failed and we were unable to recover it.
00:25:05.578 [2024-07-16 01:18:21.210839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.578 [2024-07-16 01:18:21.210880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.578 qpair failed and we were unable to recover it.
00:25:05.578 [2024-07-16 01:18:21.210984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.578 [2024-07-16 01:18:21.211013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.578 qpair failed and we were unable to recover it.
00:25:05.578 [2024-07-16 01:18:21.211115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.578 [2024-07-16 01:18:21.211144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.578 qpair failed and we were unable to recover it.
00:25:05.578 [2024-07-16 01:18:21.211244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.578 [2024-07-16 01:18:21.211272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.578 qpair failed and we were unable to recover it.
00:25:05.578 [2024-07-16 01:18:21.211397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.578 [2024-07-16 01:18:21.211424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.578 qpair failed and we were unable to recover it.
00:25:05.578 [2024-07-16 01:18:21.211523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.578 [2024-07-16 01:18:21.211550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.578 qpair failed and we were unable to recover it.
00:25:05.578 [2024-07-16 01:18:21.211650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.578 [2024-07-16 01:18:21.211679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.578 qpair failed and we were unable to recover it.
00:25:05.578 [2024-07-16 01:18:21.211809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.578 [2024-07-16 01:18:21.211841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.578 qpair failed and we were unable to recover it.
00:25:05.578 [2024-07-16 01:18:21.211941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.578 [2024-07-16 01:18:21.211977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.578 qpair failed and we were unable to recover it.
00:25:05.578 [2024-07-16 01:18:21.212084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.578 [2024-07-16 01:18:21.212112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.578 qpair failed and we were unable to recover it.
00:25:05.578 [2024-07-16 01:18:21.212207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.578 [2024-07-16 01:18:21.212234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.578 qpair failed and we were unable to recover it.
00:25:05.578 [2024-07-16 01:18:21.212333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.578 [2024-07-16 01:18:21.212361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.578 qpair failed and we were unable to recover it.
00:25:05.578 [2024-07-16 01:18:21.212460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.578 [2024-07-16 01:18:21.212490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.578 qpair failed and we were unable to recover it.
00:25:05.578 [2024-07-16 01:18:21.212589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.578 [2024-07-16 01:18:21.212618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.578 qpair failed and we were unable to recover it.
00:25:05.578 [2024-07-16 01:18:21.212711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.578 [2024-07-16 01:18:21.212738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.578 qpair failed and we were unable to recover it.
00:25:05.578 [2024-07-16 01:18:21.212885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.578 [2024-07-16 01:18:21.212912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.578 qpair failed and we were unable to recover it.
00:25:05.578 [2024-07-16 01:18:21.213017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.578 [2024-07-16 01:18:21.213045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.578 qpair failed and we were unable to recover it.
00:25:05.578 [2024-07-16 01:18:21.213142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.578 [2024-07-16 01:18:21.213169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.578 qpair failed and we were unable to recover it.
00:25:05.578 [2024-07-16 01:18:21.213262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.578 [2024-07-16 01:18:21.213289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.578 qpair failed and we were unable to recover it.
00:25:05.578 [2024-07-16 01:18:21.213380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.578 [2024-07-16 01:18:21.213407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.578 qpair failed and we were unable to recover it.
00:25:05.578 [2024-07-16 01:18:21.213500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.578 [2024-07-16 01:18:21.213527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.578 qpair failed and we were unable to recover it.
00:25:05.578 [2024-07-16 01:18:21.213632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.578 [2024-07-16 01:18:21.213660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.578 qpair failed and we were unable to recover it.
00:25:05.578 [2024-07-16 01:18:21.213790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.578 [2024-07-16 01:18:21.213817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.578 qpair failed and we were unable to recover it.
00:25:05.578 [2024-07-16 01:18:21.213922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.578 [2024-07-16 01:18:21.213953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.578 qpair failed and we were unable to recover it.
00:25:05.578 [2024-07-16 01:18:21.214063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.578 [2024-07-16 01:18:21.214092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.578 qpair failed and we were unable to recover it.
00:25:05.578 [2024-07-16 01:18:21.214192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.578 [2024-07-16 01:18:21.214220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.578 qpair failed and we were unable to recover it.
00:25:05.578 [2024-07-16 01:18:21.214320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.578 [2024-07-16 01:18:21.214348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.578 qpair failed and we were unable to recover it.
00:25:05.578 [2024-07-16 01:18:21.214478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.578 [2024-07-16 01:18:21.214507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.578 qpair failed and we were unable to recover it.
00:25:05.578 [2024-07-16 01:18:21.214638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.578 [2024-07-16 01:18:21.214679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.578 qpair failed and we were unable to recover it.
00:25:05.578 [2024-07-16 01:18:21.214789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.578 [2024-07-16 01:18:21.214818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.578 qpair failed and we were unable to recover it.
00:25:05.578 [2024-07-16 01:18:21.214916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.578 [2024-07-16 01:18:21.214943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.578 qpair failed and we were unable to recover it.
00:25:05.578 [2024-07-16 01:18:21.215056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.578 [2024-07-16 01:18:21.215083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.578 qpair failed and we were unable to recover it.
00:25:05.578 [2024-07-16 01:18:21.215182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.578 [2024-07-16 01:18:21.215210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.578 qpair failed and we were unable to recover it.
00:25:05.578 [2024-07-16 01:18:21.215332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.578 [2024-07-16 01:18:21.215359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.578 qpair failed and we were unable to recover it.
00:25:05.578 [2024-07-16 01:18:21.215457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.578 [2024-07-16 01:18:21.215485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.578 qpair failed and we were unable to recover it.
00:25:05.578 [2024-07-16 01:18:21.215587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.578 [2024-07-16 01:18:21.215615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.578 qpair failed and we were unable to recover it.
00:25:05.578 [2024-07-16 01:18:21.215712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.578 [2024-07-16 01:18:21.215744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.578 qpair failed and we were unable to recover it.
00:25:05.578 [2024-07-16 01:18:21.215879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.578 [2024-07-16 01:18:21.215907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.578 qpair failed and we were unable to recover it.
00:25:05.578 [2024-07-16 01:18:21.216056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.578 [2024-07-16 01:18:21.216083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.578 qpair failed and we were unable to recover it.
00:25:05.578 [2024-07-16 01:18:21.216190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.578 [2024-07-16 01:18:21.216217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.578 qpair failed and we were unable to recover it.
00:25:05.578 [2024-07-16 01:18:21.216340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.578 [2024-07-16 01:18:21.216367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.578 qpair failed and we were unable to recover it.
00:25:05.578 [2024-07-16 01:18:21.216489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.578 [2024-07-16 01:18:21.216516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.578 qpair failed and we were unable to recover it.
00:25:05.578 [2024-07-16 01:18:21.216605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.578 [2024-07-16 01:18:21.216632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.578 qpair failed and we were unable to recover it.
00:25:05.578 [2024-07-16 01:18:21.216727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.578 [2024-07-16 01:18:21.216754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.578 qpair failed and we were unable to recover it.
00:25:05.578 [2024-07-16 01:18:21.216849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.578 [2024-07-16 01:18:21.216875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.578 qpair failed and we were unable to recover it.
00:25:05.578 [2024-07-16 01:18:21.216977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.578 [2024-07-16 01:18:21.217006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.578 qpair failed and we were unable to recover it.
00:25:05.578 [2024-07-16 01:18:21.217132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.578 [2024-07-16 01:18:21.217159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.578 qpair failed and we were unable to recover it.
00:25:05.578 [2024-07-16 01:18:21.217249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.578 [2024-07-16 01:18:21.217278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.578 qpair failed and we were unable to recover it.
00:25:05.578 [2024-07-16 01:18:21.217371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.578 [2024-07-16 01:18:21.217398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.579 qpair failed and we were unable to recover it.
00:25:05.579 [2024-07-16 01:18:21.217522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.579 [2024-07-16 01:18:21.217549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.579 qpair failed and we were unable to recover it.
00:25:05.579 [2024-07-16 01:18:21.217666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.579 [2024-07-16 01:18:21.217707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.579 qpair failed and we were unable to recover it.
00:25:05.579 [2024-07-16 01:18:21.217857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.579 [2024-07-16 01:18:21.217897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.579 qpair failed and we were unable to recover it.
00:25:05.579 [2024-07-16 01:18:21.218009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.579 [2024-07-16 01:18:21.218043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.579 qpair failed and we were unable to recover it.
00:25:05.579 [2024-07-16 01:18:21.218147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.579 [2024-07-16 01:18:21.218174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.579 qpair failed and we were unable to recover it.
00:25:05.579 [2024-07-16 01:18:21.218271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.579 [2024-07-16 01:18:21.218299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.579 qpair failed and we were unable to recover it.
00:25:05.579 [2024-07-16 01:18:21.218397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.579 [2024-07-16 01:18:21.218424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.579 qpair failed and we were unable to recover it.
00:25:05.579 [2024-07-16 01:18:21.218540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.579 [2024-07-16 01:18:21.218567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.579 qpair failed and we were unable to recover it.
00:25:05.579 [2024-07-16 01:18:21.218699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.579 [2024-07-16 01:18:21.218728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.579 qpair failed and we were unable to recover it.
00:25:05.579 [2024-07-16 01:18:21.218838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.579 [2024-07-16 01:18:21.218865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.579 qpair failed and we were unable to recover it.
00:25:05.579 [2024-07-16 01:18:21.218982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.579 [2024-07-16 01:18:21.219010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.579 qpair failed and we were unable to recover it.
00:25:05.579 [2024-07-16 01:18:21.219105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.579 [2024-07-16 01:18:21.219132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.579 qpair failed and we were unable to recover it.
00:25:05.579 [2024-07-16 01:18:21.219234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.579 [2024-07-16 01:18:21.219260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.579 qpair failed and we were unable to recover it.
00:25:05.579 [2024-07-16 01:18:21.219355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.579 [2024-07-16 01:18:21.219383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.579 qpair failed and we were unable to recover it.
00:25:05.579 [2024-07-16 01:18:21.219475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.579 [2024-07-16 01:18:21.219503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.579 qpair failed and we were unable to recover it.
00:25:05.579 [2024-07-16 01:18:21.219598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.579 [2024-07-16 01:18:21.219625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.579 qpair failed and we were unable to recover it.
00:25:05.579 [2024-07-16 01:18:21.219725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.579 [2024-07-16 01:18:21.219751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.579 qpair failed and we were unable to recover it.
00:25:05.579 [2024-07-16 01:18:21.219885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.579 [2024-07-16 01:18:21.219913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.579 qpair failed and we were unable to recover it.
00:25:05.579 [2024-07-16 01:18:21.220019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.579 [2024-07-16 01:18:21.220048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.579 qpair failed and we were unable to recover it.
00:25:05.579 [2024-07-16 01:18:21.220152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.579 [2024-07-16 01:18:21.220179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.579 qpair failed and we were unable to recover it.
00:25:05.579 [2024-07-16 01:18:21.220267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.579 [2024-07-16 01:18:21.220293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.579 qpair failed and we were unable to recover it.
00:25:05.579 [2024-07-16 01:18:21.220420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.579 [2024-07-16 01:18:21.220447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.579 qpair failed and we were unable to recover it.
00:25:05.579 [2024-07-16 01:18:21.220561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.579 [2024-07-16 01:18:21.220591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.579 qpair failed and we were unable to recover it.
00:25:05.579 [2024-07-16 01:18:21.220701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.579 [2024-07-16 01:18:21.220730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.579 qpair failed and we were unable to recover it.
00:25:05.579 [2024-07-16 01:18:21.220833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.579 [2024-07-16 01:18:21.220861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.579 qpair failed and we were unable to recover it.
00:25:05.579 [2024-07-16 01:18:21.220977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.579 [2024-07-16 01:18:21.221016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.579 qpair failed and we were unable to recover it.
00:25:05.579 [2024-07-16 01:18:21.221115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.579 [2024-07-16 01:18:21.221141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.579 qpair failed and we were unable to recover it.
00:25:05.579 [2024-07-16 01:18:21.221244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.579 [2024-07-16 01:18:21.221271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.579 qpair failed and we were unable to recover it.
00:25:05.579 [2024-07-16 01:18:21.221366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.579 [2024-07-16 01:18:21.221393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.579 qpair failed and we were unable to recover it.
00:25:05.579 [2024-07-16 01:18:21.221493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.579 [2024-07-16 01:18:21.221520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.579 qpair failed and we were unable to recover it.
00:25:05.579 [2024-07-16 01:18:21.221656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.579 [2024-07-16 01:18:21.221688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.579 qpair failed and we were unable to recover it.
00:25:05.579 [2024-07-16 01:18:21.221825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.579 [2024-07-16 01:18:21.221853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.579 qpair failed and we were unable to recover it.
00:25:05.579 [2024-07-16 01:18:21.221961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.579 [2024-07-16 01:18:21.221989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.579 qpair failed and we were unable to recover it.
00:25:05.579 [2024-07-16 01:18:21.222083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.579 [2024-07-16 01:18:21.222110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.579 qpair failed and we were unable to recover it.
00:25:05.579 [2024-07-16 01:18:21.222207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.579 [2024-07-16 01:18:21.222234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.579 qpair failed and we were unable to recover it.
00:25:05.579 [2024-07-16 01:18:21.222355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.579 [2024-07-16 01:18:21.222382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.579 qpair failed and we were unable to recover it.
00:25:05.579 [2024-07-16 01:18:21.222499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.579 [2024-07-16 01:18:21.222526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.579 qpair failed and we were unable to recover it.
00:25:05.579 [2024-07-16 01:18:21.222622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.579 [2024-07-16 01:18:21.222649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.579 qpair failed and we were unable to recover it.
00:25:05.579 [2024-07-16 01:18:21.222775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.579 [2024-07-16 01:18:21.222815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.579 qpair failed and we were unable to recover it.
00:25:05.579 [2024-07-16 01:18:21.222925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.579 [2024-07-16 01:18:21.222954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.579 qpair failed and we were unable to recover it.
00:25:05.579 [2024-07-16 01:18:21.223059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.579 [2024-07-16 01:18:21.223086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.579 qpair failed and we were unable to recover it.
00:25:05.579 [2024-07-16 01:18:21.223213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.579 [2024-07-16 01:18:21.223239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.579 qpair failed and we were unable to recover it.
00:25:05.579 [2024-07-16 01:18:21.223331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.579 [2024-07-16 01:18:21.223357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.579 qpair failed and we were unable to recover it.
00:25:05.579 [2024-07-16 01:18:21.223464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.579 [2024-07-16 01:18:21.223493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.579 qpair failed and we were unable to recover it.
00:25:05.579 [2024-07-16 01:18:21.223597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.579 [2024-07-16 01:18:21.223625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.579 qpair failed and we were unable to recover it.
00:25:05.579 [2024-07-16 01:18:21.223723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.579 [2024-07-16 01:18:21.223751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.579 qpair failed and we were unable to recover it.
00:25:05.579 [2024-07-16 01:18:21.223871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.579 [2024-07-16 01:18:21.223898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.579 qpair failed and we were unable to recover it.
00:25:05.579 [2024-07-16 01:18:21.224000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.579 [2024-07-16 01:18:21.224028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.579 qpair failed and we were unable to recover it.
00:25:05.579 [2024-07-16 01:18:21.224128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.579 [2024-07-16 01:18:21.224156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.579 qpair failed and we were unable to recover it.
00:25:05.579 [2024-07-16 01:18:21.224256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.579 [2024-07-16 01:18:21.224283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.579 qpair failed and we were unable to recover it.
00:25:05.579 [2024-07-16 01:18:21.224406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.579 [2024-07-16 01:18:21.224433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.579 qpair failed and we were unable to recover it.
00:25:05.579 [2024-07-16 01:18:21.224532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.579 [2024-07-16 01:18:21.224560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.579 qpair failed and we were unable to recover it.
00:25:05.579 [2024-07-16 01:18:21.224664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.579 [2024-07-16 01:18:21.224691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.579 qpair failed and we were unable to recover it.
00:25:05.579 [2024-07-16 01:18:21.224793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.579 [2024-07-16 01:18:21.224821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.580 qpair failed and we were unable to recover it.
00:25:05.580 [2024-07-16 01:18:21.224912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.580 [2024-07-16 01:18:21.224939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.580 qpair failed and we were unable to recover it.
00:25:05.580 [2024-07-16 01:18:21.225052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.580 [2024-07-16 01:18:21.225081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.580 qpair failed and we were unable to recover it.
00:25:05.580 [2024-07-16 01:18:21.225186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.580 [2024-07-16 01:18:21.225212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.580 qpair failed and we were unable to recover it.
00:25:05.580 [2024-07-16 01:18:21.225312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.580 [2024-07-16 01:18:21.225338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.580 qpair failed and we were unable to recover it.
00:25:05.580 [2024-07-16 01:18:21.225430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.580 [2024-07-16 01:18:21.225457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.580 qpair failed and we were unable to recover it.
00:25:05.580 [2024-07-16 01:18:21.225548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.580 [2024-07-16 01:18:21.225574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.580 qpair failed and we were unable to recover it.
00:25:05.580 [2024-07-16 01:18:21.225700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.580 [2024-07-16 01:18:21.225727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.580 qpair failed and we were unable to recover it.
00:25:05.580 [2024-07-16 01:18:21.225824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.580 [2024-07-16 01:18:21.225850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.580 qpair failed and we were unable to recover it.
00:25:05.580 [2024-07-16 01:18:21.225953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.580 [2024-07-16 01:18:21.225987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.580 qpair failed and we were unable to recover it.
00:25:05.580 [2024-07-16 01:18:21.226110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.580 [2024-07-16 01:18:21.226136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.580 qpair failed and we were unable to recover it.
00:25:05.580 [2024-07-16 01:18:21.226239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.580 [2024-07-16 01:18:21.226265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.580 qpair failed and we were unable to recover it.
00:25:05.580 [2024-07-16 01:18:21.226360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.580 [2024-07-16 01:18:21.226387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.580 qpair failed and we were unable to recover it.
00:25:05.580 [2024-07-16 01:18:21.226481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.580 [2024-07-16 01:18:21.226507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.580 qpair failed and we were unable to recover it.
00:25:05.580 [2024-07-16 01:18:21.226630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.580 [2024-07-16 01:18:21.226660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.580 qpair failed and we were unable to recover it.
00:25:05.580 [2024-07-16 01:18:21.226778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.580 [2024-07-16 01:18:21.226819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.580 qpair failed and we were unable to recover it.
00:25:05.580 [2024-07-16 01:18:21.226939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.580 [2024-07-16 01:18:21.226986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.580 qpair failed and we were unable to recover it.
00:25:05.580 [2024-07-16 01:18:21.227090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.580 [2024-07-16 01:18:21.227127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.580 qpair failed and we were unable to recover it.
00:25:05.580 [2024-07-16 01:18:21.227236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.580 [2024-07-16 01:18:21.227265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.580 qpair failed and we were unable to recover it.
00:25:05.580 [2024-07-16 01:18:21.227418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.580 [2024-07-16 01:18:21.227445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.580 qpair failed and we were unable to recover it.
00:25:05.580 [2024-07-16 01:18:21.227567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.580 [2024-07-16 01:18:21.227593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.580 qpair failed and we were unable to recover it.
00:25:05.580 [2024-07-16 01:18:21.227693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.580 [2024-07-16 01:18:21.227721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.580 qpair failed and we were unable to recover it. 00:25:05.580 [2024-07-16 01:18:21.227823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.580 [2024-07-16 01:18:21.227853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.580 qpair failed and we were unable to recover it. 00:25:05.580 [2024-07-16 01:18:21.227953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.580 [2024-07-16 01:18:21.227988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.580 qpair failed and we were unable to recover it. 00:25:05.580 [2024-07-16 01:18:21.228090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.580 [2024-07-16 01:18:21.228115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.580 qpair failed and we were unable to recover it. 00:25:05.580 [2024-07-16 01:18:21.228210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.580 [2024-07-16 01:18:21.228236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.580 qpair failed and we were unable to recover it. 00:25:05.580 [2024-07-16 01:18:21.228332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.580 [2024-07-16 01:18:21.228357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.580 qpair failed and we were unable to recover it. 00:25:05.580 [2024-07-16 01:18:21.228485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.580 [2024-07-16 01:18:21.228514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.580 qpair failed and we were unable to recover it. 00:25:05.580 [2024-07-16 01:18:21.228638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.580 [2024-07-16 01:18:21.228665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.580 qpair failed and we were unable to recover it. 00:25:05.580 [2024-07-16 01:18:21.228783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.580 [2024-07-16 01:18:21.228810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.580 qpair failed and we were unable to recover it. 00:25:05.580 [2024-07-16 01:18:21.228907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.580 [2024-07-16 01:18:21.228934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.580 qpair failed and we were unable to recover it. 
00:25:05.580 [2024-07-16 01:18:21.229045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.580 [2024-07-16 01:18:21.229073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.580 qpair failed and we were unable to recover it. 00:25:05.580 [2024-07-16 01:18:21.229202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.580 [2024-07-16 01:18:21.229229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.580 qpair failed and we were unable to recover it. 00:25:05.580 [2024-07-16 01:18:21.229325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.580 [2024-07-16 01:18:21.229352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.580 qpair failed and we were unable to recover it. 00:25:05.580 [2024-07-16 01:18:21.229448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.580 [2024-07-16 01:18:21.229478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.580 qpair failed and we were unable to recover it. 00:25:05.580 [2024-07-16 01:18:21.229628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.580 [2024-07-16 01:18:21.229655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.580 qpair failed and we were unable to recover it. 00:25:05.580 [2024-07-16 01:18:21.229749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.580 [2024-07-16 01:18:21.229776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.580 qpair failed and we were unable to recover it. 00:25:05.580 [2024-07-16 01:18:21.229876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.580 [2024-07-16 01:18:21.229904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.580 qpair failed and we were unable to recover it. 00:25:05.580 [2024-07-16 01:18:21.230032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.580 [2024-07-16 01:18:21.230072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.580 qpair failed and we were unable to recover it. 00:25:05.580 [2024-07-16 01:18:21.230173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.580 [2024-07-16 01:18:21.230201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.580 qpair failed and we were unable to recover it. 00:25:05.580 [2024-07-16 01:18:21.230297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.580 [2024-07-16 01:18:21.230324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.580 qpair failed and we were unable to recover it. 
00:25:05.580 [2024-07-16 01:18:21.230423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.580 [2024-07-16 01:18:21.230450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.580 qpair failed and we were unable to recover it. 00:25:05.580 [2024-07-16 01:18:21.230550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.580 [2024-07-16 01:18:21.230577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.580 qpair failed and we were unable to recover it. 00:25:05.580 [2024-07-16 01:18:21.230679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.580 [2024-07-16 01:18:21.230709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.580 qpair failed and we were unable to recover it. 00:25:05.580 [2024-07-16 01:18:21.230815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.580 [2024-07-16 01:18:21.230844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.580 qpair failed and we were unable to recover it. 00:25:05.580 [2024-07-16 01:18:21.230952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.581 [2024-07-16 01:18:21.230988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.581 qpair failed and we were unable to recover it. 00:25:05.581 [2024-07-16 01:18:21.231118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.581 [2024-07-16 01:18:21.231144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.581 qpair failed and we were unable to recover it. 00:25:05.581 [2024-07-16 01:18:21.231245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.581 [2024-07-16 01:18:21.231271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.581 qpair failed and we were unable to recover it. 00:25:05.581 [2024-07-16 01:18:21.231367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.581 [2024-07-16 01:18:21.231392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.581 qpair failed and we were unable to recover it. 00:25:05.581 [2024-07-16 01:18:21.231490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.581 [2024-07-16 01:18:21.231519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.581 qpair failed and we were unable to recover it. 00:25:05.581 [2024-07-16 01:18:21.231621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.581 [2024-07-16 01:18:21.231648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.581 qpair failed and we were unable to recover it. 
00:25:05.581 [2024-07-16 01:18:21.231741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.581 [2024-07-16 01:18:21.231768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.581 qpair failed and we were unable to recover it. 00:25:05.581 [2024-07-16 01:18:21.231867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.581 [2024-07-16 01:18:21.231894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.581 qpair failed and we were unable to recover it. 00:25:05.581 [2024-07-16 01:18:21.231994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.581 [2024-07-16 01:18:21.232023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.581 qpair failed and we were unable to recover it. 00:25:05.581 [2024-07-16 01:18:21.232128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.581 [2024-07-16 01:18:21.232155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.581 qpair failed and we were unable to recover it. 00:25:05.581 [2024-07-16 01:18:21.232281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.581 [2024-07-16 01:18:21.232309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.581 qpair failed and we were unable to recover it. 00:25:05.581 [2024-07-16 01:18:21.232408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.581 [2024-07-16 01:18:21.232435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.581 qpair failed and we were unable to recover it. 00:25:05.581 [2024-07-16 01:18:21.232531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.581 [2024-07-16 01:18:21.232558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.581 qpair failed and we were unable to recover it. 00:25:05.581 [2024-07-16 01:18:21.232661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.581 [2024-07-16 01:18:21.232689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.581 qpair failed and we were unable to recover it. 00:25:05.581 [2024-07-16 01:18:21.232792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.581 [2024-07-16 01:18:21.232822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.581 qpair failed and we were unable to recover it. 00:25:05.581 [2024-07-16 01:18:21.232921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.581 [2024-07-16 01:18:21.232947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.581 qpair failed and we were unable to recover it. 
00:25:05.581 [2024-07-16 01:18:21.233055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.581 [2024-07-16 01:18:21.233081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.581 qpair failed and we were unable to recover it. 00:25:05.581 [2024-07-16 01:18:21.233205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.581 [2024-07-16 01:18:21.233232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.581 qpair failed and we were unable to recover it. 00:25:05.581 [2024-07-16 01:18:21.233352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.581 [2024-07-16 01:18:21.233379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.581 qpair failed and we were unable to recover it. 00:25:05.581 [2024-07-16 01:18:21.233472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.581 [2024-07-16 01:18:21.233499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.581 qpair failed and we were unable to recover it. 00:25:05.581 [2024-07-16 01:18:21.233596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.581 [2024-07-16 01:18:21.233625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.581 qpair failed and we were unable to recover it. 00:25:05.581 [2024-07-16 01:18:21.233747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.581 [2024-07-16 01:18:21.233774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.581 qpair failed and we were unable to recover it. 00:25:05.581 [2024-07-16 01:18:21.233899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.581 [2024-07-16 01:18:21.233926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.581 qpair failed and we were unable to recover it. 00:25:05.581 [2024-07-16 01:18:21.234036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.581 [2024-07-16 01:18:21.234064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.581 qpair failed and we were unable to recover it. 00:25:05.581 [2024-07-16 01:18:21.234161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.581 [2024-07-16 01:18:21.234190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.581 qpair failed and we were unable to recover it. 00:25:05.581 [2024-07-16 01:18:21.234294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.581 [2024-07-16 01:18:21.234321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.581 qpair failed and we were unable to recover it. 
00:25:05.581 [2024-07-16 01:18:21.234428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.581 [2024-07-16 01:18:21.234456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.581 qpair failed and we were unable to recover it. 00:25:05.581 [2024-07-16 01:18:21.234556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.581 [2024-07-16 01:18:21.234584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.581 qpair failed and we were unable to recover it. 00:25:05.581 [2024-07-16 01:18:21.234721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.581 [2024-07-16 01:18:21.234762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.581 qpair failed and we were unable to recover it. 00:25:05.581 [2024-07-16 01:18:21.234891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.581 [2024-07-16 01:18:21.234920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.581 qpair failed and we were unable to recover it. 00:25:05.581 [2024-07-16 01:18:21.235058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.581 [2024-07-16 01:18:21.235084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.581 qpair failed and we were unable to recover it. 00:25:05.581 [2024-07-16 01:18:21.235189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.581 [2024-07-16 01:18:21.235216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.581 qpair failed and we were unable to recover it. 00:25:05.581 [2024-07-16 01:18:21.235315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.581 [2024-07-16 01:18:21.235342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.581 qpair failed and we were unable to recover it. 00:25:05.581 [2024-07-16 01:18:21.235464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.581 [2024-07-16 01:18:21.235492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.581 qpair failed and we were unable to recover it. 00:25:05.581 [2024-07-16 01:18:21.235591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.581 [2024-07-16 01:18:21.235618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.581 qpair failed and we were unable to recover it. 00:25:05.581 [2024-07-16 01:18:21.235724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.581 [2024-07-16 01:18:21.235750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.581 qpair failed and we were unable to recover it. 
00:25:05.581 [2024-07-16 01:18:21.235846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.581 [2024-07-16 01:18:21.235872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.581 qpair failed and we were unable to recover it. 00:25:05.581 [2024-07-16 01:18:21.235999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.581 [2024-07-16 01:18:21.236028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.581 qpair failed and we were unable to recover it. 00:25:05.581 [2024-07-16 01:18:21.236126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.581 [2024-07-16 01:18:21.236153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.581 qpair failed and we were unable to recover it. 00:25:05.581 [2024-07-16 01:18:21.236254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.581 [2024-07-16 01:18:21.236285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.581 qpair failed and we were unable to recover it. 00:25:05.581 [2024-07-16 01:18:21.236386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.581 [2024-07-16 01:18:21.236412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.581 qpair failed and we were unable to recover it. 00:25:05.581 [2024-07-16 01:18:21.236530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.581 [2024-07-16 01:18:21.236557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.581 qpair failed and we were unable to recover it. 00:25:05.581 [2024-07-16 01:18:21.236649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.581 [2024-07-16 01:18:21.236675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.581 qpair failed and we were unable to recover it. 00:25:05.581 [2024-07-16 01:18:21.236773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.581 [2024-07-16 01:18:21.236800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.581 qpair failed and we were unable to recover it. 00:25:05.581 [2024-07-16 01:18:21.236918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.581 [2024-07-16 01:18:21.236970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.581 qpair failed and we were unable to recover it. 00:25:05.581 [2024-07-16 01:18:21.237100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.581 [2024-07-16 01:18:21.237140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.581 qpair failed and we were unable to recover it. 
00:25:05.581 [2024-07-16 01:18:21.237252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.581 [2024-07-16 01:18:21.237281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.581 qpair failed and we were unable to recover it. 00:25:05.581 [2024-07-16 01:18:21.237384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.581 [2024-07-16 01:18:21.237412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.581 qpair failed and we were unable to recover it. 00:25:05.581 [2024-07-16 01:18:21.237510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.581 [2024-07-16 01:18:21.237538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.581 qpair failed and we were unable to recover it. 00:25:05.581 [2024-07-16 01:18:21.237695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.581 [2024-07-16 01:18:21.237723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.581 qpair failed and we were unable to recover it. 00:25:05.582 [2024-07-16 01:18:21.237819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.582 [2024-07-16 01:18:21.237847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.582 qpair failed and we were unable to recover it. 00:25:05.582 [2024-07-16 01:18:21.237951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.582 [2024-07-16 01:18:21.237987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.582 qpair failed and we were unable to recover it. 00:25:05.582 [2024-07-16 01:18:21.238116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.582 [2024-07-16 01:18:21.238144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.582 qpair failed and we were unable to recover it. 00:25:05.582 [2024-07-16 01:18:21.238250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.582 [2024-07-16 01:18:21.238278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.582 qpair failed and we were unable to recover it. 00:25:05.582 [2024-07-16 01:18:21.238377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.582 [2024-07-16 01:18:21.238404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.582 qpair failed and we were unable to recover it. 00:25:05.582 [2024-07-16 01:18:21.238498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.582 [2024-07-16 01:18:21.238526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.582 qpair failed and we were unable to recover it. 
00:25:05.582 [2024-07-16 01:18:21.238649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.582 [2024-07-16 01:18:21.238676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.582 qpair failed and we were unable to recover it. 00:25:05.582 [2024-07-16 01:18:21.238776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.582 [2024-07-16 01:18:21.238804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.582 qpair failed and we were unable to recover it. 00:25:05.582 [2024-07-16 01:18:21.238969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.582 [2024-07-16 01:18:21.239009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.582 qpair failed and we were unable to recover it. 00:25:05.582 [2024-07-16 01:18:21.239107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.582 [2024-07-16 01:18:21.239133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.582 qpair failed and we were unable to recover it. 00:25:05.582 [2024-07-16 01:18:21.239231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.582 [2024-07-16 01:18:21.239258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.582 qpair failed and we were unable to recover it. 00:25:05.582 [2024-07-16 01:18:21.239358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.582 [2024-07-16 01:18:21.239385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.582 qpair failed and we were unable to recover it. 00:25:05.582 [2024-07-16 01:18:21.239490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.582 [2024-07-16 01:18:21.239516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.582 qpair failed and we were unable to recover it. 00:25:05.582 [2024-07-16 01:18:21.239638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.582 [2024-07-16 01:18:21.239665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.582 qpair failed and we were unable to recover it. 00:25:05.582 [2024-07-16 01:18:21.239754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.582 [2024-07-16 01:18:21.239781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.582 qpair failed and we were unable to recover it. 00:25:05.582 [2024-07-16 01:18:21.239875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.582 [2024-07-16 01:18:21.239902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.582 qpair failed and we were unable to recover it. 
00:25:05.582 [2024-07-16 01:18:21.240010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.582 [2024-07-16 01:18:21.240041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.582 qpair failed and we were unable to recover it. 00:25:05.582 [2024-07-16 01:18:21.240141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.582 [2024-07-16 01:18:21.240167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.582 qpair failed and we were unable to recover it. 00:25:05.582 [2024-07-16 01:18:21.240286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.582 [2024-07-16 01:18:21.240312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.582 qpair failed and we were unable to recover it. 00:25:05.582 [2024-07-16 01:18:21.240429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.582 [2024-07-16 01:18:21.240456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.582 qpair failed and we were unable to recover it. 00:25:05.582 [2024-07-16 01:18:21.240552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.582 [2024-07-16 01:18:21.240581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.582 qpair failed and we were unable to recover it. 00:25:05.582 [2024-07-16 01:18:21.240686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.582 [2024-07-16 01:18:21.240713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.582 qpair failed and we were unable to recover it. 00:25:05.582 [2024-07-16 01:18:21.240816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.582 [2024-07-16 01:18:21.240844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.582 qpair failed and we were unable to recover it. 00:25:05.582 [2024-07-16 01:18:21.240967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.582 [2024-07-16 01:18:21.240994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.582 qpair failed and we were unable to recover it. 00:25:05.582 [2024-07-16 01:18:21.241081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.582 [2024-07-16 01:18:21.241108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.582 qpair failed and we were unable to recover it. 00:25:05.582 [2024-07-16 01:18:21.241212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.582 [2024-07-16 01:18:21.241243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.582 qpair failed and we were unable to recover it. 
00:25:05.582 [2024-07-16 01:18:21.241370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.582 [2024-07-16 01:18:21.241398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.582 qpair failed and we were unable to recover it. 00:25:05.582 [2024-07-16 01:18:21.241498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.582 [2024-07-16 01:18:21.241525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.582 qpair failed and we were unable to recover it. 00:25:05.582 [2024-07-16 01:18:21.241647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.582 [2024-07-16 01:18:21.241673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.582 qpair failed and we were unable to recover it. 00:25:05.582 [2024-07-16 01:18:21.241764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.582 [2024-07-16 01:18:21.241792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.582 qpair failed and we were unable to recover it. 00:25:05.582 [2024-07-16 01:18:21.241891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.582 [2024-07-16 01:18:21.241918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.582 qpair failed and we were unable to recover it. 00:25:05.582 [2024-07-16 01:18:21.242022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.582 [2024-07-16 01:18:21.242048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.582 qpair failed and we were unable to recover it. 00:25:05.582 [2024-07-16 01:18:21.242141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.582 [2024-07-16 01:18:21.242167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.582 qpair failed and we were unable to recover it. 00:25:05.582 [2024-07-16 01:18:21.242265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.582 [2024-07-16 01:18:21.242291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.582 qpair failed and we were unable to recover it. 00:25:05.582 [2024-07-16 01:18:21.242382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.582 [2024-07-16 01:18:21.242409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.582 qpair failed and we were unable to recover it. 00:25:05.582 [2024-07-16 01:18:21.242505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.582 [2024-07-16 01:18:21.242531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.582 qpair failed and we were unable to recover it. 
00:25:05.582 [2024-07-16 01:18:21.242618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.582 [2024-07-16 01:18:21.242644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.582 qpair failed and we were unable to recover it. 00:25:05.582 [2024-07-16 01:18:21.242744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.582 [2024-07-16 01:18:21.242774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.582 qpair failed and we were unable to recover it. 00:25:05.582 [2024-07-16 01:18:21.242926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.582 [2024-07-16 01:18:21.242953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.582 qpair failed and we were unable to recover it. 00:25:05.582 [2024-07-16 01:18:21.243057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.582 [2024-07-16 01:18:21.243085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.582 qpair failed and we were unable to recover it. 00:25:05.582 [2024-07-16 01:18:21.243186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.582 [2024-07-16 01:18:21.243213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.582 qpair failed and we were unable to recover it. 00:25:05.582 [2024-07-16 01:18:21.243334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.582 [2024-07-16 01:18:21.243361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.582 qpair failed and we were unable to recover it. 00:25:05.582 [2024-07-16 01:18:21.243457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.582 [2024-07-16 01:18:21.243484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.582 qpair failed and we were unable to recover it. 00:25:05.582 [2024-07-16 01:18:21.243610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.582 [2024-07-16 01:18:21.243639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.582 qpair failed and we were unable to recover it. 00:25:05.582 [2024-07-16 01:18:21.243790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.582 [2024-07-16 01:18:21.243820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.582 qpair failed and we were unable to recover it. 00:25:05.582 [2024-07-16 01:18:21.243917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.582 [2024-07-16 01:18:21.243943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.582 qpair failed and we were unable to recover it. 
00:25:05.582 [2024-07-16 01:18:21.244048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.582 [2024-07-16 01:18:21.244075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.582 qpair failed and we were unable to recover it. 00:25:05.582 [2024-07-16 01:18:21.244174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.582 [2024-07-16 01:18:21.244200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.582 qpair failed and we were unable to recover it. 00:25:05.582 [2024-07-16 01:18:21.244305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.582 [2024-07-16 01:18:21.244331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.582 qpair failed and we were unable to recover it. 00:25:05.583 [2024-07-16 01:18:21.244430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.583 [2024-07-16 01:18:21.244458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.583 qpair failed and we were unable to recover it. 00:25:05.583 [2024-07-16 01:18:21.244557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.583 [2024-07-16 01:18:21.244585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.583 qpair failed and we were unable to recover it. 00:25:05.583 [2024-07-16 01:18:21.244684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.583 [2024-07-16 01:18:21.244711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.583 qpair failed and we were unable to recover it. 00:25:05.583 [2024-07-16 01:18:21.244811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.583 [2024-07-16 01:18:21.244841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.583 qpair failed and we were unable to recover it. 00:25:05.583 [2024-07-16 01:18:21.244966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.583 [2024-07-16 01:18:21.244994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.583 qpair failed and we were unable to recover it. 00:25:05.583 [2024-07-16 01:18:21.245115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.583 [2024-07-16 01:18:21.245142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.583 qpair failed and we were unable to recover it. 00:25:05.583 [2024-07-16 01:18:21.245241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.583 [2024-07-16 01:18:21.245268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.583 qpair failed and we were unable to recover it. 
00:25:05.583 [2024-07-16 01:18:21.245395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.583 [2024-07-16 01:18:21.245423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.583 qpair failed and we were unable to recover it.
00:25:05.583 [... the identical three-line sequence (posix.c:1023:posix_sock_create: connect() failed, errno = 111 -> nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error -> "qpair failed and we were unable to recover it.") repeats continuously from 01:18:21.245523 through 01:18:21.274673, cycling through tqpair values 0x7f7a58000b90, 0x7f7a60000b90, 0x7f7a68000b90, and 0x14b73f0, always against addr=10.0.0.2, port=4420 ...]
00:25:05.587 [2024-07-16 01:18:21.274772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.587 [2024-07-16 01:18:21.274800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.587 qpair failed and we were unable to recover it. 00:25:05.587 [2024-07-16 01:18:21.274913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.587 [2024-07-16 01:18:21.274970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.587 qpair failed and we were unable to recover it. 00:25:05.587 [2024-07-16 01:18:21.275075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.587 [2024-07-16 01:18:21.275103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.587 qpair failed and we were unable to recover it. 00:25:05.587 [2024-07-16 01:18:21.275234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.587 [2024-07-16 01:18:21.275262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.587 qpair failed and we were unable to recover it. 00:25:05.587 [2024-07-16 01:18:21.275361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.587 [2024-07-16 01:18:21.275388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.587 qpair failed and we were unable to recover it. 00:25:05.587 [2024-07-16 01:18:21.275483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.587 [2024-07-16 01:18:21.275519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.587 qpair failed and we were unable to recover it. 00:25:05.587 [2024-07-16 01:18:21.275612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.587 [2024-07-16 01:18:21.275639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.587 qpair failed and we were unable to recover it. 00:25:05.587 [2024-07-16 01:18:21.275753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.587 [2024-07-16 01:18:21.275793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.587 qpair failed and we were unable to recover it. 00:25:05.587 [2024-07-16 01:18:21.275913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.587 [2024-07-16 01:18:21.275953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.587 qpair failed and we were unable to recover it. 00:25:05.587 [2024-07-16 01:18:21.276068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.587 [2024-07-16 01:18:21.276094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.587 qpair failed and we were unable to recover it. 
00:25:05.587 [2024-07-16 01:18:21.276193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.587 [2024-07-16 01:18:21.276219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.587 qpair failed and we were unable to recover it. 00:25:05.587 [2024-07-16 01:18:21.276310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.587 [2024-07-16 01:18:21.276336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.587 qpair failed and we were unable to recover it. 00:25:05.587 [2024-07-16 01:18:21.276441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.587 [2024-07-16 01:18:21.276467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.587 qpair failed and we were unable to recover it. 00:25:05.587 [2024-07-16 01:18:21.276569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.587 [2024-07-16 01:18:21.276595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.587 qpair failed and we were unable to recover it. 00:25:05.587 [2024-07-16 01:18:21.276688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.587 [2024-07-16 01:18:21.276715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.587 qpair failed and we were unable to recover it. 00:25:05.587 [2024-07-16 01:18:21.276833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.587 [2024-07-16 01:18:21.276858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.587 qpair failed and we were unable to recover it. 00:25:05.587 [2024-07-16 01:18:21.276952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.587 [2024-07-16 01:18:21.276985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.587 qpair failed and we were unable to recover it. 00:25:05.587 [2024-07-16 01:18:21.277079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.587 [2024-07-16 01:18:21.277104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.587 qpair failed and we were unable to recover it. 00:25:05.587 [2024-07-16 01:18:21.277232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.587 [2024-07-16 01:18:21.277262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.587 qpair failed and we were unable to recover it. 00:25:05.587 [2024-07-16 01:18:21.277367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.587 [2024-07-16 01:18:21.277393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.587 qpair failed and we were unable to recover it. 
00:25:05.587 [2024-07-16 01:18:21.277495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.587 [2024-07-16 01:18:21.277521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.587 qpair failed and we were unable to recover it. 00:25:05.587 [2024-07-16 01:18:21.277621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.587 [2024-07-16 01:18:21.277646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.587 qpair failed and we were unable to recover it. 00:25:05.587 [2024-07-16 01:18:21.277772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.587 [2024-07-16 01:18:21.277797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.587 qpair failed and we were unable to recover it. 00:25:05.587 [2024-07-16 01:18:21.277893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.587 [2024-07-16 01:18:21.277918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.587 qpair failed and we were unable to recover it. 00:25:05.587 [2024-07-16 01:18:21.278022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.587 [2024-07-16 01:18:21.278049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.587 qpair failed and we were unable to recover it. 00:25:05.587 [2024-07-16 01:18:21.278149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.587 [2024-07-16 01:18:21.278174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.587 qpair failed and we were unable to recover it. 00:25:05.587 [2024-07-16 01:18:21.278277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.587 [2024-07-16 01:18:21.278303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.587 qpair failed and we were unable to recover it. 00:25:05.587 [2024-07-16 01:18:21.278408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.587 [2024-07-16 01:18:21.278434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.587 qpair failed and we were unable to recover it. 00:25:05.587 [2024-07-16 01:18:21.278539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.587 [2024-07-16 01:18:21.278565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.587 qpair failed and we were unable to recover it. 00:25:05.587 [2024-07-16 01:18:21.278666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.587 [2024-07-16 01:18:21.278692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.587 qpair failed and we were unable to recover it. 
00:25:05.587 [2024-07-16 01:18:21.278810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.587 [2024-07-16 01:18:21.278836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.587 qpair failed and we were unable to recover it. 00:25:05.587 [2024-07-16 01:18:21.278931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.587 [2024-07-16 01:18:21.278963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.587 qpair failed and we were unable to recover it. 00:25:05.587 [2024-07-16 01:18:21.279062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.587 [2024-07-16 01:18:21.279088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.587 qpair failed and we were unable to recover it. 00:25:05.588 [2024-07-16 01:18:21.279192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.588 [2024-07-16 01:18:21.279218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.588 qpair failed and we were unable to recover it. 00:25:05.588 [2024-07-16 01:18:21.279338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.588 [2024-07-16 01:18:21.279363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.588 qpair failed and we were unable to recover it. 00:25:05.588 [2024-07-16 01:18:21.279484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.588 [2024-07-16 01:18:21.279510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.588 qpair failed and we were unable to recover it. 00:25:05.588 [2024-07-16 01:18:21.279610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.588 [2024-07-16 01:18:21.279637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.588 qpair failed and we were unable to recover it. 00:25:05.588 [2024-07-16 01:18:21.279779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.588 [2024-07-16 01:18:21.279819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.588 qpair failed and we were unable to recover it. 00:25:05.588 [2024-07-16 01:18:21.279933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.588 [2024-07-16 01:18:21.279982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.588 qpair failed and we were unable to recover it. 00:25:05.588 [2024-07-16 01:18:21.280106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.588 [2024-07-16 01:18:21.280147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.588 qpair failed and we were unable to recover it. 
00:25:05.588 [2024-07-16 01:18:21.280283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.588 [2024-07-16 01:18:21.280311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.588 qpair failed and we were unable to recover it. 00:25:05.588 [2024-07-16 01:18:21.280406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.588 [2024-07-16 01:18:21.280432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.588 qpair failed and we were unable to recover it. 00:25:05.588 [2024-07-16 01:18:21.280555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.588 [2024-07-16 01:18:21.280581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.588 qpair failed and we were unable to recover it. 00:25:05.588 [2024-07-16 01:18:21.280678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.588 [2024-07-16 01:18:21.280705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.588 qpair failed and we were unable to recover it. 00:25:05.588 [2024-07-16 01:18:21.280814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.588 [2024-07-16 01:18:21.280854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.588 qpair failed and we were unable to recover it. 00:25:05.588 [2024-07-16 01:18:21.280995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.588 [2024-07-16 01:18:21.281040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.588 qpair failed and we were unable to recover it. 00:25:05.588 [2024-07-16 01:18:21.281153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.588 [2024-07-16 01:18:21.281184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.588 qpair failed and we were unable to recover it. 00:25:05.588 [2024-07-16 01:18:21.281336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.588 [2024-07-16 01:18:21.281364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.588 qpair failed and we were unable to recover it. 00:25:05.588 [2024-07-16 01:18:21.281459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.588 [2024-07-16 01:18:21.281486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.588 qpair failed and we were unable to recover it. 00:25:05.588 [2024-07-16 01:18:21.281583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.588 [2024-07-16 01:18:21.281610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.588 qpair failed and we were unable to recover it. 
00:25:05.588 [2024-07-16 01:18:21.281711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.588 [2024-07-16 01:18:21.281739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.588 qpair failed and we were unable to recover it. 00:25:05.588 [2024-07-16 01:18:21.281839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.588 [2024-07-16 01:18:21.281867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.588 qpair failed and we were unable to recover it. 00:25:05.588 [2024-07-16 01:18:21.281972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.588 [2024-07-16 01:18:21.282002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.588 qpair failed and we were unable to recover it. 00:25:05.588 [2024-07-16 01:18:21.282124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.588 [2024-07-16 01:18:21.282150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.588 qpair failed and we were unable to recover it. 00:25:05.588 [2024-07-16 01:18:21.282274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.588 [2024-07-16 01:18:21.282301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.588 qpair failed and we were unable to recover it. 00:25:05.588 [2024-07-16 01:18:21.282427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.588 [2024-07-16 01:18:21.282454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.588 qpair failed and we were unable to recover it. 00:25:05.588 [2024-07-16 01:18:21.282554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.588 [2024-07-16 01:18:21.282580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.588 qpair failed and we were unable to recover it. 00:25:05.588 [2024-07-16 01:18:21.282678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.588 [2024-07-16 01:18:21.282703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.588 qpair failed and we were unable to recover it. 00:25:05.588 [2024-07-16 01:18:21.282852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.588 [2024-07-16 01:18:21.282878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.588 qpair failed and we were unable to recover it. 00:25:05.588 [2024-07-16 01:18:21.282989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.588 [2024-07-16 01:18:21.283018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.588 qpair failed and we were unable to recover it. 
00:25:05.588 [2024-07-16 01:18:21.283137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.588 [2024-07-16 01:18:21.283177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.588 qpair failed and we were unable to recover it. 00:25:05.588 [2024-07-16 01:18:21.283279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.588 [2024-07-16 01:18:21.283308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.588 qpair failed and we were unable to recover it. 00:25:05.588 [2024-07-16 01:18:21.283432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.588 [2024-07-16 01:18:21.283460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.588 qpair failed and we were unable to recover it. 00:25:05.588 [2024-07-16 01:18:21.283554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.588 [2024-07-16 01:18:21.283581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.588 qpair failed and we were unable to recover it. 00:25:05.588 [2024-07-16 01:18:21.283714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.588 [2024-07-16 01:18:21.283745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.588 qpair failed and we were unable to recover it. 00:25:05.588 [2024-07-16 01:18:21.283871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.588 [2024-07-16 01:18:21.283898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.588 qpair failed and we were unable to recover it. 00:25:05.588 [2024-07-16 01:18:21.284001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.588 [2024-07-16 01:18:21.284027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.588 qpair failed and we were unable to recover it. 00:25:05.588 [2024-07-16 01:18:21.284125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.588 [2024-07-16 01:18:21.284150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.588 qpair failed and we were unable to recover it. 00:25:05.588 [2024-07-16 01:18:21.284250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.588 [2024-07-16 01:18:21.284276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.588 qpair failed and we were unable to recover it. 00:25:05.588 [2024-07-16 01:18:21.284404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.588 [2024-07-16 01:18:21.284429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.588 qpair failed and we were unable to recover it. 
00:25:05.588 [2024-07-16 01:18:21.284564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.588 [2024-07-16 01:18:21.284592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.588 qpair failed and we were unable to recover it. 00:25:05.588 [2024-07-16 01:18:21.284719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.588 [2024-07-16 01:18:21.284749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.588 qpair failed and we were unable to recover it. 00:25:05.588 [2024-07-16 01:18:21.284879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.588 [2024-07-16 01:18:21.284911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.588 qpair failed and we were unable to recover it. 00:25:05.588 [2024-07-16 01:18:21.285023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.588 [2024-07-16 01:18:21.285051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.588 qpair failed and we were unable to recover it. 00:25:05.588 [2024-07-16 01:18:21.285146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.588 [2024-07-16 01:18:21.285173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.588 qpair failed and we were unable to recover it. 00:25:05.588 [2024-07-16 01:18:21.285305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.588 [2024-07-16 01:18:21.285332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.588 qpair failed and we were unable to recover it. 00:25:05.588 [2024-07-16 01:18:21.285458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.588 [2024-07-16 01:18:21.285486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.588 qpair failed and we were unable to recover it. 00:25:05.588 [2024-07-16 01:18:21.285580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.588 [2024-07-16 01:18:21.285606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.588 qpair failed and we were unable to recover it. 00:25:05.588 [2024-07-16 01:18:21.285699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.588 [2024-07-16 01:18:21.285726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.588 qpair failed and we were unable to recover it. 00:25:05.588 [2024-07-16 01:18:21.285824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.588 [2024-07-16 01:18:21.285850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.588 qpair failed and we were unable to recover it. 
00:25:05.588 [2024-07-16 01:18:21.285942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.588 [2024-07-16 01:18:21.285975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.588 qpair failed and we were unable to recover it. 00:25:05.588 [2024-07-16 01:18:21.286116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.588 [2024-07-16 01:18:21.286156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.588 qpair failed and we were unable to recover it. 00:25:05.588 [2024-07-16 01:18:21.286264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.588 [2024-07-16 01:18:21.286292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.588 qpair failed and we were unable to recover it. 00:25:05.588 [2024-07-16 01:18:21.286384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.588 [2024-07-16 01:18:21.286411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.588 qpair failed and we were unable to recover it. 00:25:05.589 [2024-07-16 01:18:21.286514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.589 [2024-07-16 01:18:21.286540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.589 qpair failed and we were unable to recover it. 00:25:05.589 [2024-07-16 01:18:21.286649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.589 [2024-07-16 01:18:21.286679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.589 qpair failed and we were unable to recover it. 00:25:05.589 [2024-07-16 01:18:21.286815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.589 [2024-07-16 01:18:21.286842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.589 qpair failed and we were unable to recover it. 00:25:05.589 [2024-07-16 01:18:21.286951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.589 [2024-07-16 01:18:21.286984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.589 qpair failed and we were unable to recover it. 00:25:05.589 [2024-07-16 01:18:21.287091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.589 [2024-07-16 01:18:21.287118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.589 qpair failed and we were unable to recover it. 00:25:05.589 [2024-07-16 01:18:21.287219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.589 [2024-07-16 01:18:21.287245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.589 qpair failed and we were unable to recover it. 
00:25:05.589 [2024-07-16 01:18:21.287376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.589 [2024-07-16 01:18:21.287403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.589 qpair failed and we were unable to recover it. 00:25:05.589 [2024-07-16 01:18:21.287500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.589 [2024-07-16 01:18:21.287528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.589 qpair failed and we were unable to recover it. 00:25:05.589 [2024-07-16 01:18:21.287628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.589 [2024-07-16 01:18:21.287654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.589 qpair failed and we were unable to recover it. 00:25:05.589 [2024-07-16 01:18:21.287755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.589 [2024-07-16 01:18:21.287783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.589 qpair failed and we were unable to recover it. 00:25:05.589 [2024-07-16 01:18:21.287885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.589 [2024-07-16 01:18:21.287911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.589 qpair failed and we were unable to recover it. 00:25:05.589 [2024-07-16 01:18:21.288027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.589 [2024-07-16 01:18:21.288054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.589 qpair failed and we were unable to recover it. 00:25:05.589 [2024-07-16 01:18:21.288176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.589 [2024-07-16 01:18:21.288202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.589 qpair failed and we were unable to recover it. 00:25:05.589 [2024-07-16 01:18:21.288301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.589 [2024-07-16 01:18:21.288327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.589 qpair failed and we were unable to recover it. 00:25:05.589 [2024-07-16 01:18:21.288418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.589 [2024-07-16 01:18:21.288443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.589 qpair failed and we were unable to recover it. 00:25:05.589 [2024-07-16 01:18:21.288573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.589 [2024-07-16 01:18:21.288601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.589 qpair failed and we were unable to recover it. 
00:25:05.589 [2024-07-16 01:18:21.288733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.589 [2024-07-16 01:18:21.288761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.589 qpair failed and we were unable to recover it. 00:25:05.589 [2024-07-16 01:18:21.288876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.589 [2024-07-16 01:18:21.288903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.589 qpair failed and we were unable to recover it. 00:25:05.589 [2024-07-16 01:18:21.289013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.589 [2024-07-16 01:18:21.289041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.589 qpair failed and we were unable to recover it. 00:25:05.589 [2024-07-16 01:18:21.289146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.589 [2024-07-16 01:18:21.289173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.589 qpair failed and we were unable to recover it. 00:25:05.589 [2024-07-16 01:18:21.289276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.589 [2024-07-16 01:18:21.289303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.589 qpair failed and we were unable to recover it. 00:25:05.589 [2024-07-16 01:18:21.289402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.589 [2024-07-16 01:18:21.289429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.589 qpair failed and we were unable to recover it. 00:25:05.589 [2024-07-16 01:18:21.289559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.589 [2024-07-16 01:18:21.289587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.589 qpair failed and we were unable to recover it. 00:25:05.589 [2024-07-16 01:18:21.289689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.589 [2024-07-16 01:18:21.289715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.589 qpair failed and we were unable to recover it. 00:25:05.589 [2024-07-16 01:18:21.289814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.589 [2024-07-16 01:18:21.289842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.589 qpair failed and we were unable to recover it. 00:25:05.589 [2024-07-16 01:18:21.289940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.589 [2024-07-16 01:18:21.289972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.589 qpair failed and we were unable to recover it. 
00:25:05.589 [2024-07-16 01:18:21.290070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.589 [2024-07-16 01:18:21.290096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.589 qpair failed and we were unable to recover it. 00:25:05.589 [2024-07-16 01:18:21.290195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.589 [2024-07-16 01:18:21.290220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.589 qpair failed and we were unable to recover it. 00:25:05.589 [2024-07-16 01:18:21.290346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.589 [2024-07-16 01:18:21.290372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.589 qpair failed and we were unable to recover it. 00:25:05.589 [2024-07-16 01:18:21.290539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.589 [2024-07-16 01:18:21.290579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.589 qpair failed and we were unable to recover it. 00:25:05.589 [2024-07-16 01:18:21.290677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.589 [2024-07-16 01:18:21.290704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.589 qpair failed and we were unable to recover it. 00:25:05.589 [2024-07-16 01:18:21.290837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.589 [2024-07-16 01:18:21.290866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.589 qpair failed and we were unable to recover it. 00:25:05.589 [2024-07-16 01:18:21.290973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.589 [2024-07-16 01:18:21.291002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.589 qpair failed and we were unable to recover it. 00:25:05.589 [2024-07-16 01:18:21.291096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.589 [2024-07-16 01:18:21.291124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.589 qpair failed and we were unable to recover it. 00:25:05.589 [2024-07-16 01:18:21.291248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.589 [2024-07-16 01:18:21.291276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.589 qpair failed and we were unable to recover it. 00:25:05.589 [2024-07-16 01:18:21.291379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.589 [2024-07-16 01:18:21.291407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.589 qpair failed and we were unable to recover it. 
00:25:05.589 [2024-07-16 01:18:21.291535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.589 [2024-07-16 01:18:21.291561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.589 qpair failed and we were unable to recover it. 00:25:05.589 [2024-07-16 01:18:21.291655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.589 [2024-07-16 01:18:21.291683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.589 qpair failed and we were unable to recover it. 00:25:05.589 [2024-07-16 01:18:21.291778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.589 [2024-07-16 01:18:21.291805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.589 qpair failed and we were unable to recover it. 00:25:05.589 [2024-07-16 01:18:21.291915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.589 [2024-07-16 01:18:21.291965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.589 qpair failed and we were unable to recover it. 00:25:05.589 [2024-07-16 01:18:21.292104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.589 [2024-07-16 01:18:21.292134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.589 qpair failed and we were unable to recover it. 00:25:05.589 [2024-07-16 01:18:21.292228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.589 [2024-07-16 01:18:21.292255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.589 qpair failed and we were unable to recover it. 00:25:05.589 [2024-07-16 01:18:21.292353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.589 [2024-07-16 01:18:21.292380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.589 qpair failed and we were unable to recover it. 00:25:05.589 [2024-07-16 01:18:21.292502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.589 [2024-07-16 01:18:21.292529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.589 qpair failed and we were unable to recover it. 00:25:05.589 [2024-07-16 01:18:21.292625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.589 [2024-07-16 01:18:21.292654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.589 qpair failed and we were unable to recover it. 00:25:05.589 [2024-07-16 01:18:21.292751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.589 [2024-07-16 01:18:21.292779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.589 qpair failed and we were unable to recover it. 
00:25:05.589 [2024-07-16 01:18:21.292878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.589 [2024-07-16 01:18:21.292905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.589 qpair failed and we were unable to recover it. 00:25:05.589 [2024-07-16 01:18:21.293011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.589 [2024-07-16 01:18:21.293040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.589 qpair failed and we were unable to recover it. 00:25:05.589 [2024-07-16 01:18:21.293145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.589 [2024-07-16 01:18:21.293173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.589 qpair failed and we were unable to recover it. 00:25:05.589 [2024-07-16 01:18:21.293278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.589 [2024-07-16 01:18:21.293305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.589 qpair failed and we were unable to recover it. 00:25:05.589 [2024-07-16 01:18:21.293423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.589 [2024-07-16 01:18:21.293448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.589 qpair failed and we were unable to recover it. 00:25:05.589 [2024-07-16 01:18:21.293547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.589 [2024-07-16 01:18:21.293573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.589 qpair failed and we were unable to recover it. 00:25:05.589 [2024-07-16 01:18:21.293697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.589 [2024-07-16 01:18:21.293723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.589 qpair failed and we were unable to recover it. 00:25:05.589 [2024-07-16 01:18:21.293818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.589 [2024-07-16 01:18:21.293847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.589 qpair failed and we were unable to recover it. 00:25:05.589 [2024-07-16 01:18:21.293949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.589 [2024-07-16 01:18:21.293984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.590 qpair failed and we were unable to recover it. 00:25:05.590 [2024-07-16 01:18:21.294111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.590 [2024-07-16 01:18:21.294141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.590 qpair failed and we were unable to recover it. 
00:25:05.590 [2024-07-16 01:18:21.294271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.590 [2024-07-16 01:18:21.294296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.590 qpair failed and we were unable to recover it. 00:25:05.590 [2024-07-16 01:18:21.294396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.590 [2024-07-16 01:18:21.294422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.590 qpair failed and we were unable to recover it. 00:25:05.590 [2024-07-16 01:18:21.294541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.590 [2024-07-16 01:18:21.294566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.590 qpair failed and we were unable to recover it. 00:25:05.590 [2024-07-16 01:18:21.294667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.590 [2024-07-16 01:18:21.294693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.590 qpair failed and we were unable to recover it. 00:25:05.590 [2024-07-16 01:18:21.294796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.590 [2024-07-16 01:18:21.294836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.590 qpair failed and we were unable to recover it. 00:25:05.590 [2024-07-16 01:18:21.294983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.590 [2024-07-16 01:18:21.295024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.590 qpair failed and we were unable to recover it. 00:25:05.590 [2024-07-16 01:18:21.295128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.590 [2024-07-16 01:18:21.295156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.590 qpair failed and we were unable to recover it. 00:25:05.590 [2024-07-16 01:18:21.295251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.590 [2024-07-16 01:18:21.295278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.590 qpair failed and we were unable to recover it. 00:25:05.590 [2024-07-16 01:18:21.295378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.590 [2024-07-16 01:18:21.295406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.590 qpair failed and we were unable to recover it. 00:25:05.590 [2024-07-16 01:18:21.295512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.590 [2024-07-16 01:18:21.295539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.590 qpair failed and we were unable to recover it. 
00:25:05.590 [2024-07-16 01:18:21.295638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.590 [2024-07-16 01:18:21.295666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.590 qpair failed and we were unable to recover it.
00:25:05.590 [2024-07-16 01:18:21.295788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.590 [2024-07-16 01:18:21.295814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.590 qpair failed and we were unable to recover it.
00:25:05.590 [2024-07-16 01:18:21.295914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.590 [2024-07-16 01:18:21.295942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.590 qpair failed and we were unable to recover it.
00:25:05.590 [2024-07-16 01:18:21.296063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.590 [2024-07-16 01:18:21.296089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.590 qpair failed and we were unable to recover it.
00:25:05.590 [2024-07-16 01:18:21.296186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.590 [2024-07-16 01:18:21.296212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.590 qpair failed and we were unable to recover it.
00:25:05.590 [2024-07-16 01:18:21.296319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.590 [2024-07-16 01:18:21.296355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.590 qpair failed and we were unable to recover it.
00:25:05.590 [2024-07-16 01:18:21.296447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.590 [2024-07-16 01:18:21.296473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.590 qpair failed and we were unable to recover it.
00:25:05.590 [2024-07-16 01:18:21.296623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.590 [2024-07-16 01:18:21.296649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.590 qpair failed and we were unable to recover it.
00:25:05.590 [2024-07-16 01:18:21.296780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.590 [2024-07-16 01:18:21.296806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.590 qpair failed and we were unable to recover it.
00:25:05.590 [2024-07-16 01:18:21.296896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.590 [2024-07-16 01:18:21.296922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.590 qpair failed and we were unable to recover it.
00:25:05.590 [2024-07-16 01:18:21.297021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.590 [2024-07-16 01:18:21.297053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.590 qpair failed and we were unable to recover it.
00:25:05.590 [2024-07-16 01:18:21.297148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.590 [2024-07-16 01:18:21.297175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.590 qpair failed and we were unable to recover it.
00:25:05.590 [2024-07-16 01:18:21.297303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.590 [2024-07-16 01:18:21.297330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.590 qpair failed and we were unable to recover it.
00:25:05.590 [2024-07-16 01:18:21.297459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.590 [2024-07-16 01:18:21.297484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.590 qpair failed and we were unable to recover it.
00:25:05.590 [2024-07-16 01:18:21.297577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.590 [2024-07-16 01:18:21.297604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.590 qpair failed and we were unable to recover it.
00:25:05.590 [2024-07-16 01:18:21.297725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.590 [2024-07-16 01:18:21.297751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.590 qpair failed and we were unable to recover it.
00:25:05.590 [2024-07-16 01:18:21.297845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.590 [2024-07-16 01:18:21.297876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.590 qpair failed and we were unable to recover it.
00:25:05.590 [2024-07-16 01:18:21.298010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.590 [2024-07-16 01:18:21.298036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.590 qpair failed and we were unable to recover it.
00:25:05.590 [2024-07-16 01:18:21.298127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.590 [2024-07-16 01:18:21.298153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.590 qpair failed and we were unable to recover it.
00:25:05.590 [2024-07-16 01:18:21.298249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.590 [2024-07-16 01:18:21.298276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.590 qpair failed and we were unable to recover it.
00:25:05.590 [2024-07-16 01:18:21.298372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.590 [2024-07-16 01:18:21.298398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.590 qpair failed and we were unable to recover it.
00:25:05.590 [2024-07-16 01:18:21.298502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.590 [2024-07-16 01:18:21.298529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.590 qpair failed and we were unable to recover it.
00:25:05.590 [2024-07-16 01:18:21.298636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.590 [2024-07-16 01:18:21.298662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.590 qpair failed and we were unable to recover it.
00:25:05.590 [2024-07-16 01:18:21.298757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.590 [2024-07-16 01:18:21.298783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.590 qpair failed and we were unable to recover it.
00:25:05.590 [2024-07-16 01:18:21.298878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.590 [2024-07-16 01:18:21.298905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.590 qpair failed and we were unable to recover it.
00:25:05.590 [2024-07-16 01:18:21.299005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.590 [2024-07-16 01:18:21.299033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.590 qpair failed and we were unable to recover it.
00:25:05.590 [2024-07-16 01:18:21.299124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.590 [2024-07-16 01:18:21.299151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.590 qpair failed and we were unable to recover it.
00:25:05.590 [2024-07-16 01:18:21.299272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.590 [2024-07-16 01:18:21.299299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.590 qpair failed and we were unable to recover it.
00:25:05.590 [2024-07-16 01:18:21.299391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.590 [2024-07-16 01:18:21.299417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.590 qpair failed and we were unable to recover it.
00:25:05.590 [2024-07-16 01:18:21.299537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.590 [2024-07-16 01:18:21.299564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.590 qpair failed and we were unable to recover it.
00:25:05.590 [2024-07-16 01:18:21.299692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.590 [2024-07-16 01:18:21.299719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.590 qpair failed and we were unable to recover it.
00:25:05.590 [2024-07-16 01:18:21.299824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.591 [2024-07-16 01:18:21.299850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.591 qpair failed and we were unable to recover it.
00:25:05.591 [2024-07-16 01:18:21.299969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.591 [2024-07-16 01:18:21.300009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.591 qpair failed and we were unable to recover it.
00:25:05.591 [2024-07-16 01:18:21.300131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.591 [2024-07-16 01:18:21.300172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.591 qpair failed and we were unable to recover it.
00:25:05.591 [2024-07-16 01:18:21.300284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.591 [2024-07-16 01:18:21.300313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.591 qpair failed and we were unable to recover it.
00:25:05.591 [2024-07-16 01:18:21.300410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.591 [2024-07-16 01:18:21.300439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.591 qpair failed and we were unable to recover it.
00:25:05.591 [2024-07-16 01:18:21.300539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.591 [2024-07-16 01:18:21.300567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.591 qpair failed and we were unable to recover it.
00:25:05.591 [2024-07-16 01:18:21.300696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.591 [2024-07-16 01:18:21.300723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.591 qpair failed and we were unable to recover it.
00:25:05.591 [2024-07-16 01:18:21.300827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.591 [2024-07-16 01:18:21.300855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.591 qpair failed and we were unable to recover it.
00:25:05.591 [2024-07-16 01:18:21.300952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.591 [2024-07-16 01:18:21.300990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.591 qpair failed and we were unable to recover it.
00:25:05.591 [2024-07-16 01:18:21.301093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.591 [2024-07-16 01:18:21.301121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.591 qpair failed and we were unable to recover it.
00:25:05.591 [2024-07-16 01:18:21.301222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.591 [2024-07-16 01:18:21.301250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.591 qpair failed and we were unable to recover it.
00:25:05.591 [2024-07-16 01:18:21.301345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.591 [2024-07-16 01:18:21.301372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.591 qpair failed and we were unable to recover it.
00:25:05.591 [2024-07-16 01:18:21.301486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.591 [2024-07-16 01:18:21.301518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.591 qpair failed and we were unable to recover it.
00:25:05.591 [2024-07-16 01:18:21.301623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.591 [2024-07-16 01:18:21.301651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.591 qpair failed and we were unable to recover it.
00:25:05.591 [2024-07-16 01:18:21.301752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.591 [2024-07-16 01:18:21.301779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.591 qpair failed and we were unable to recover it.
00:25:05.591 [2024-07-16 01:18:21.301894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.591 [2024-07-16 01:18:21.301920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.591 qpair failed and we were unable to recover it.
00:25:05.591 [2024-07-16 01:18:21.302027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.591 [2024-07-16 01:18:21.302054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.591 qpair failed and we were unable to recover it.
00:25:05.591 [2024-07-16 01:18:21.302147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.591 [2024-07-16 01:18:21.302173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.591 qpair failed and we were unable to recover it.
00:25:05.591 [2024-07-16 01:18:21.302320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.591 [2024-07-16 01:18:21.302346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.591 qpair failed and we were unable to recover it.
00:25:05.591 [2024-07-16 01:18:21.302447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.591 [2024-07-16 01:18:21.302476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.591 qpair failed and we were unable to recover it.
00:25:05.591 [2024-07-16 01:18:21.302599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.591 [2024-07-16 01:18:21.302639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.591 qpair failed and we were unable to recover it.
00:25:05.591 [2024-07-16 01:18:21.302758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.591 [2024-07-16 01:18:21.302798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.591 qpair failed and we were unable to recover it.
00:25:05.591 [2024-07-16 01:18:21.302910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.591 [2024-07-16 01:18:21.302938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.591 qpair failed and we were unable to recover it.
00:25:05.591 [2024-07-16 01:18:21.303054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.591 [2024-07-16 01:18:21.303081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.591 qpair failed and we were unable to recover it.
00:25:05.591 [2024-07-16 01:18:21.303177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.591 [2024-07-16 01:18:21.303204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.591 qpair failed and we were unable to recover it.
00:25:05.591 [2024-07-16 01:18:21.303325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.591 [2024-07-16 01:18:21.303351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.591 qpair failed and we were unable to recover it.
00:25:05.591 [2024-07-16 01:18:21.303456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.591 [2024-07-16 01:18:21.303482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.591 qpair failed and we were unable to recover it.
00:25:05.591 [2024-07-16 01:18:21.303584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.591 [2024-07-16 01:18:21.303614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.591 qpair failed and we were unable to recover it.
00:25:05.591 [2024-07-16 01:18:21.303718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.591 [2024-07-16 01:18:21.303747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.591 qpair failed and we were unable to recover it.
00:25:05.591 [2024-07-16 01:18:21.303842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.591 [2024-07-16 01:18:21.303869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.591 qpair failed and we were unable to recover it.
00:25:05.591 [2024-07-16 01:18:21.303974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.591 [2024-07-16 01:18:21.304001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.591 qpair failed and we were unable to recover it.
00:25:05.591 [2024-07-16 01:18:21.304097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.591 [2024-07-16 01:18:21.304123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.591 qpair failed and we were unable to recover it.
00:25:05.591 [2024-07-16 01:18:21.304219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.591 [2024-07-16 01:18:21.304245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.591 qpair failed and we were unable to recover it.
00:25:05.591 [2024-07-16 01:18:21.304343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.591 [2024-07-16 01:18:21.304372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.591 qpair failed and we were unable to recover it.
00:25:05.591 [2024-07-16 01:18:21.304473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.591 [2024-07-16 01:18:21.304501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.591 qpair failed and we were unable to recover it.
00:25:05.591 [2024-07-16 01:18:21.304642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.591 [2024-07-16 01:18:21.304683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.591 qpair failed and we were unable to recover it.
00:25:05.591 [2024-07-16 01:18:21.304817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.591 [2024-07-16 01:18:21.304845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.591 qpair failed and we were unable to recover it.
00:25:05.591 [2024-07-16 01:18:21.304939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.591 [2024-07-16 01:18:21.304973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.591 qpair failed and we were unable to recover it.
00:25:05.591 [2024-07-16 01:18:21.305078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.591 [2024-07-16 01:18:21.305104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.591 qpair failed and we were unable to recover it.
00:25:05.591 [2024-07-16 01:18:21.305238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.591 [2024-07-16 01:18:21.305267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.591 qpair failed and we were unable to recover it.
00:25:05.591 [2024-07-16 01:18:21.305367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.591 [2024-07-16 01:18:21.305394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.591 qpair failed and we were unable to recover it.
00:25:05.591 [2024-07-16 01:18:21.305492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.591 [2024-07-16 01:18:21.305521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.591 qpair failed and we were unable to recover it.
00:25:05.591 [2024-07-16 01:18:21.305622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.591 [2024-07-16 01:18:21.305648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.591 qpair failed and we were unable to recover it.
00:25:05.591 [2024-07-16 01:18:21.305746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.591 [2024-07-16 01:18:21.305775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.591 qpair failed and we were unable to recover it.
00:25:05.591 [2024-07-16 01:18:21.305879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.591 [2024-07-16 01:18:21.305907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.591 qpair failed and we were unable to recover it.
00:25:05.591 [2024-07-16 01:18:21.306012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.591 [2024-07-16 01:18:21.306039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.591 qpair failed and we were unable to recover it.
00:25:05.591 [2024-07-16 01:18:21.306146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.591 [2024-07-16 01:18:21.306173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.591 qpair failed and we were unable to recover it.
00:25:05.591 [2024-07-16 01:18:21.306269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.591 [2024-07-16 01:18:21.306297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.591 qpair failed and we were unable to recover it.
00:25:05.591 [2024-07-16 01:18:21.306390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.591 [2024-07-16 01:18:21.306417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.591 qpair failed and we were unable to recover it.
00:25:05.591 [2024-07-16 01:18:21.306542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.591 [2024-07-16 01:18:21.306568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.591 qpair failed and we were unable to recover it.
00:25:05.591 [2024-07-16 01:18:21.306723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.591 [2024-07-16 01:18:21.306750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.591 qpair failed and we were unable to recover it.
00:25:05.591 [2024-07-16 01:18:21.306853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.591 [2024-07-16 01:18:21.306882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.592 qpair failed and we were unable to recover it.
00:25:05.592 [2024-07-16 01:18:21.307006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.592 [2024-07-16 01:18:21.307041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.592 qpair failed and we were unable to recover it.
00:25:05.592 [2024-07-16 01:18:21.307164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.592 [2024-07-16 01:18:21.307191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.592 qpair failed and we were unable to recover it.
00:25:05.592 [2024-07-16 01:18:21.307315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.592 [2024-07-16 01:18:21.307342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.592 qpair failed and we were unable to recover it.
00:25:05.592 [2024-07-16 01:18:21.307501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.592 [2024-07-16 01:18:21.307527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.592 qpair failed and we were unable to recover it.
00:25:05.592 [2024-07-16 01:18:21.307620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.592 [2024-07-16 01:18:21.307647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.592 qpair failed and we were unable to recover it.
00:25:05.592 [2024-07-16 01:18:21.307769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.592 [2024-07-16 01:18:21.307796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.592 qpair failed and we were unable to recover it.
00:25:05.592 [2024-07-16 01:18:21.307890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.592 [2024-07-16 01:18:21.307917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.592 qpair failed and we were unable to recover it.
00:25:05.592 [2024-07-16 01:18:21.308029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.592 [2024-07-16 01:18:21.308057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.592 qpair failed and we were unable to recover it.
00:25:05.592 [2024-07-16 01:18:21.308180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.592 [2024-07-16 01:18:21.308207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.592 qpair failed and we were unable to recover it.
00:25:05.592 [2024-07-16 01:18:21.308326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.592 [2024-07-16 01:18:21.308353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.592 qpair failed and we were unable to recover it.
00:25:05.592 [2024-07-16 01:18:21.308447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.592 [2024-07-16 01:18:21.308475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.592 qpair failed and we were unable to recover it.
00:25:05.592 [2024-07-16 01:18:21.308596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.592 [2024-07-16 01:18:21.308623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.592 qpair failed and we were unable to recover it.
00:25:05.592 [2024-07-16 01:18:21.308747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.592 [2024-07-16 01:18:21.308775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.592 qpair failed and we were unable to recover it.
00:25:05.592 [2024-07-16 01:18:21.308874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.592 [2024-07-16 01:18:21.308903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.592 qpair failed and we were unable to recover it.
00:25:05.592 [2024-07-16 01:18:21.309021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.592 [2024-07-16 01:18:21.309050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.592 qpair failed and we were unable to recover it.
00:25:05.592 [2024-07-16 01:18:21.309180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.592 [2024-07-16 01:18:21.309206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.592 qpair failed and we were unable to recover it.
00:25:05.592 [2024-07-16 01:18:21.309325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.592 [2024-07-16 01:18:21.309352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.592 qpair failed and we were unable to recover it.
00:25:05.592 [2024-07-16 01:18:21.309457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.592 [2024-07-16 01:18:21.309484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.592 qpair failed and we were unable to recover it.
00:25:05.592 [2024-07-16 01:18:21.309607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.592 [2024-07-16 01:18:21.309633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.592 qpair failed and we were unable to recover it.
00:25:05.592 [2024-07-16 01:18:21.309759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.592 [2024-07-16 01:18:21.309786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.592 qpair failed and we were unable to recover it.
00:25:05.592 [2024-07-16 01:18:21.309888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.592 [2024-07-16 01:18:21.309915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.592 qpair failed and we were unable to recover it.
00:25:05.592 [2024-07-16 01:18:21.310014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.592 [2024-07-16 01:18:21.310041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.592 qpair failed and we were unable to recover it.
00:25:05.592 [2024-07-16 01:18:21.310137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.592 [2024-07-16 01:18:21.310164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.592 qpair failed and we were unable to recover it.
00:25:05.592 [2024-07-16 01:18:21.310290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.592 [2024-07-16 01:18:21.310316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.592 qpair failed and we were unable to recover it.
00:25:05.592 [2024-07-16 01:18:21.310413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.592 [2024-07-16 01:18:21.310439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.592 qpair failed and we were unable to recover it.
00:25:05.592 [2024-07-16 01:18:21.310537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.592 [2024-07-16 01:18:21.310564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.592 qpair failed and we were unable to recover it.
00:25:05.592 [2024-07-16 01:18:21.310691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.592 [2024-07-16 01:18:21.310717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.592 qpair failed and we were unable to recover it.
00:25:05.592 [2024-07-16 01:18:21.310853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.592 [2024-07-16 01:18:21.310893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.592 qpair failed and we were unable to recover it.
00:25:05.592 [2024-07-16 01:18:21.310998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.592 [2024-07-16 01:18:21.311028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.592 qpair failed and we were unable to recover it.
00:25:05.592 [2024-07-16 01:18:21.311166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.592 [2024-07-16 01:18:21.311205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.592 qpair failed and we were unable to recover it.
00:25:05.592 [2024-07-16 01:18:21.311314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.592 [2024-07-16 01:18:21.311343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.592 qpair failed and we were unable to recover it.
00:25:05.592 [2024-07-16 01:18:21.311445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.592 [2024-07-16 01:18:21.311472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.592 qpair failed and we were unable to recover it.
00:25:05.592 [2024-07-16 01:18:21.311573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.592 [2024-07-16 01:18:21.311601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.592 qpair failed and we were unable to recover it.
00:25:05.592 [2024-07-16 01:18:21.311699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.592 [2024-07-16 01:18:21.311726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.592 qpair failed and we were unable to recover it.
00:25:05.592 [2024-07-16 01:18:21.311848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.592 [2024-07-16 01:18:21.311874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.592 qpair failed and we were unable to recover it.
00:25:05.592 [2024-07-16 01:18:21.311968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.592 [2024-07-16 01:18:21.311995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.592 qpair failed and we were unable to recover it.
00:25:05.592 [2024-07-16 01:18:21.312105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.592 [2024-07-16 01:18:21.312133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.592 qpair failed and we were unable to recover it.
00:25:05.592 [2024-07-16 01:18:21.312236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.592 [2024-07-16 01:18:21.312265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.592 qpair failed and we were unable to recover it.
00:25:05.592 [2024-07-16 01:18:21.312384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.592 [2024-07-16 01:18:21.312410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.592 qpair failed and we were unable to recover it.
00:25:05.592 [2024-07-16 01:18:21.312514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.592 [2024-07-16 01:18:21.312540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.592 qpair failed and we were unable to recover it.
00:25:05.592 [2024-07-16 01:18:21.312633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.592 [2024-07-16 01:18:21.312664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.592 qpair failed and we were unable to recover it.
00:25:05.592 [2024-07-16 01:18:21.312768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.592 [2024-07-16 01:18:21.312794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.592 qpair failed and we were unable to recover it.
00:25:05.592 [2024-07-16 01:18:21.312894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.592 [2024-07-16 01:18:21.312922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.592 qpair failed and we were unable to recover it.
00:25:05.592 [2024-07-16 01:18:21.313022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.592 [2024-07-16 01:18:21.313049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.592 qpair failed and we were unable to recover it.
00:25:05.592 [2024-07-16 01:18:21.313148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.592 [2024-07-16 01:18:21.313175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.592 qpair failed and we were unable to recover it.
00:25:05.592 [2024-07-16 01:18:21.313276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.592 [2024-07-16 01:18:21.313302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.592 qpair failed and we were unable to recover it.
00:25:05.592 [2024-07-16 01:18:21.313394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.592 [2024-07-16 01:18:21.313420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.592 qpair failed and we were unable to recover it.
00:25:05.592 [2024-07-16 01:18:21.313543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.592 [2024-07-16 01:18:21.313569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.592 qpair failed and we were unable to recover it.
00:25:05.592 [2024-07-16 01:18:21.313666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.592 [2024-07-16 01:18:21.313693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.592 qpair failed and we were unable to recover it.
00:25:05.592 [2024-07-16 01:18:21.313793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.592 [2024-07-16 01:18:21.313819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.592 qpair failed and we were unable to recover it.
00:25:05.592 [2024-07-16 01:18:21.313910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.592 [2024-07-16 01:18:21.313936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.592 qpair failed and we were unable to recover it.
00:25:05.592 [2024-07-16 01:18:21.314039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.592 [2024-07-16 01:18:21.314066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.592 qpair failed and we were unable to recover it.
00:25:05.592 [2024-07-16 01:18:21.314165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.592 [2024-07-16 01:18:21.314191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.592 qpair failed and we were unable to recover it.
00:25:05.592 [2024-07-16 01:18:21.314318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.592 [2024-07-16 01:18:21.314344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.593 qpair failed and we were unable to recover it.
00:25:05.593 [2024-07-16 01:18:21.314477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.593 [2024-07-16 01:18:21.314504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.593 qpair failed and we were unable to recover it.
00:25:05.593 [2024-07-16 01:18:21.314628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.593 [2024-07-16 01:18:21.314654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.593 qpair failed and we were unable to recover it.
00:25:05.593 [2024-07-16 01:18:21.314746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.593 [2024-07-16 01:18:21.314773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.593 qpair failed and we were unable to recover it.
00:25:05.593 [2024-07-16 01:18:21.314864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.593 [2024-07-16 01:18:21.314890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.593 qpair failed and we were unable to recover it.
00:25:05.593 [2024-07-16 01:18:21.315004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.593 [2024-07-16 01:18:21.315044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.593 qpair failed and we were unable to recover it.
00:25:05.593 [2024-07-16 01:18:21.315148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.593 [2024-07-16 01:18:21.315176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.593 qpair failed and we were unable to recover it.
00:25:05.593 [2024-07-16 01:18:21.315273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.593 [2024-07-16 01:18:21.315300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.593 qpair failed and we were unable to recover it.
00:25:05.593 [2024-07-16 01:18:21.315399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.593 [2024-07-16 01:18:21.315425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.593 qpair failed and we were unable to recover it.
00:25:05.593 [2024-07-16 01:18:21.315550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.593 [2024-07-16 01:18:21.315576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.593 qpair failed and we were unable to recover it.
00:25:05.593 [2024-07-16 01:18:21.315704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.593 [2024-07-16 01:18:21.315733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.593 qpair failed and we were unable to recover it.
00:25:05.593 [2024-07-16 01:18:21.315835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.593 [2024-07-16 01:18:21.315863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.593 qpair failed and we were unable to recover it.
00:25:05.593 [2024-07-16 01:18:21.315970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.593 [2024-07-16 01:18:21.315998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.593 qpair failed and we were unable to recover it.
00:25:05.593 [2024-07-16 01:18:21.316122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.593 [2024-07-16 01:18:21.316149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.593 qpair failed and we were unable to recover it.
00:25:05.593 [2024-07-16 01:18:21.316237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.593 [2024-07-16 01:18:21.316268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.593 qpair failed and we were unable to recover it.
00:25:05.593 [2024-07-16 01:18:21.316358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.593 [2024-07-16 01:18:21.316384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.593 qpair failed and we were unable to recover it.
00:25:05.593 [2024-07-16 01:18:21.316508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.593 [2024-07-16 01:18:21.316536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.593 qpair failed and we were unable to recover it.
00:25:05.593 [2024-07-16 01:18:21.316674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.593 [2024-07-16 01:18:21.316701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.593 qpair failed and we were unable to recover it.
00:25:05.593 [2024-07-16 01:18:21.316789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.593 [2024-07-16 01:18:21.316816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.593 qpair failed and we were unable to recover it.
00:25:05.593 [2024-07-16 01:18:21.316933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.593 [2024-07-16 01:18:21.316969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.593 qpair failed and we were unable to recover it.
00:25:05.593 [2024-07-16 01:18:21.317103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.593 [2024-07-16 01:18:21.317129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.593 qpair failed and we were unable to recover it.
00:25:05.593 [2024-07-16 01:18:21.317228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.593 [2024-07-16 01:18:21.317254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.593 qpair failed and we were unable to recover it.
00:25:05.593 [2024-07-16 01:18:21.317351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.593 [2024-07-16 01:18:21.317378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.593 qpair failed and we were unable to recover it.
00:25:05.593 [2024-07-16 01:18:21.317465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.593 [2024-07-16 01:18:21.317492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.593 qpair failed and we were unable to recover it.
00:25:05.593 [2024-07-16 01:18:21.317585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.593 [2024-07-16 01:18:21.317611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.593 qpair failed and we were unable to recover it.
00:25:05.593 [2024-07-16 01:18:21.317715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.593 [2024-07-16 01:18:21.317742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.593 qpair failed and we were unable to recover it.
00:25:05.593 [2024-07-16 01:18:21.317840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.593 [2024-07-16 01:18:21.317866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.593 qpair failed and we were unable to recover it.
00:25:05.593 [2024-07-16 01:18:21.317988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.593 [2024-07-16 01:18:21.318016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.593 qpair failed and we were unable to recover it.
00:25:05.593 [2024-07-16 01:18:21.318124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.593 [2024-07-16 01:18:21.318151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.593 qpair failed and we were unable to recover it.
00:25:05.593 [2024-07-16 01:18:21.318242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.593 [2024-07-16 01:18:21.318268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.593 qpair failed and we were unable to recover it.
00:25:05.593 [2024-07-16 01:18:21.318391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.593 [2024-07-16 01:18:21.318417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.593 qpair failed and we were unable to recover it.
00:25:05.593 [2024-07-16 01:18:21.318512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.593 [2024-07-16 01:18:21.318539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.593 qpair failed and we were unable to recover it.
00:25:05.593 [2024-07-16 01:18:21.318640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.593 [2024-07-16 01:18:21.318666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.593 qpair failed and we were unable to recover it.
00:25:05.593 [2024-07-16 01:18:21.318814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.593 [2024-07-16 01:18:21.318840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.593 qpair failed and we were unable to recover it.
00:25:05.593 [2024-07-16 01:18:21.318952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.593 [2024-07-16 01:18:21.318999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.593 qpair failed and we were unable to recover it. 00:25:05.593 [2024-07-16 01:18:21.319142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.593 [2024-07-16 01:18:21.319182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.593 qpair failed and we were unable to recover it. 00:25:05.593 [2024-07-16 01:18:21.319283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.593 [2024-07-16 01:18:21.319311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.593 qpair failed and we were unable to recover it. 00:25:05.593 [2024-07-16 01:18:21.319407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.593 [2024-07-16 01:18:21.319433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.593 qpair failed and we were unable to recover it. 00:25:05.593 [2024-07-16 01:18:21.319534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.593 [2024-07-16 01:18:21.319560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.593 qpair failed and we were unable to recover it. 00:25:05.593 [2024-07-16 01:18:21.319694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.593 [2024-07-16 01:18:21.319735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.593 qpair failed and we were unable to recover it. 00:25:05.593 [2024-07-16 01:18:21.319835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.593 [2024-07-16 01:18:21.319864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.593 qpair failed and we were unable to recover it. 00:25:05.593 [2024-07-16 01:18:21.319975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.593 [2024-07-16 01:18:21.320007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.593 qpair failed and we were unable to recover it. 00:25:05.593 [2024-07-16 01:18:21.320103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.593 [2024-07-16 01:18:21.320131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.593 qpair failed and we were unable to recover it. 00:25:05.593 [2024-07-16 01:18:21.320228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.593 [2024-07-16 01:18:21.320255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.593 qpair failed and we were unable to recover it. 
00:25:05.593 [2024-07-16 01:18:21.320379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.593 [2024-07-16 01:18:21.320407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.593 qpair failed and we were unable to recover it. 00:25:05.593 [2024-07-16 01:18:21.320509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.594 [2024-07-16 01:18:21.320536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.594 qpair failed and we were unable to recover it. 00:25:05.594 [2024-07-16 01:18:21.320631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.594 [2024-07-16 01:18:21.320660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.594 qpair failed and we were unable to recover it. 00:25:05.594 [2024-07-16 01:18:21.320757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.594 [2024-07-16 01:18:21.320783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.594 qpair failed and we were unable to recover it. 00:25:05.594 [2024-07-16 01:18:21.320885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.594 [2024-07-16 01:18:21.320912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.594 qpair failed and we were unable to recover it. 00:25:05.594 [2024-07-16 01:18:21.321041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.594 [2024-07-16 01:18:21.321068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.594 qpair failed and we were unable to recover it. 00:25:05.594 [2024-07-16 01:18:21.321159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.594 [2024-07-16 01:18:21.321185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.594 qpair failed and we were unable to recover it. 00:25:05.594 [2024-07-16 01:18:21.321286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.594 [2024-07-16 01:18:21.321312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.594 qpair failed and we were unable to recover it. 00:25:05.594 [2024-07-16 01:18:21.321406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.594 [2024-07-16 01:18:21.321434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.594 qpair failed and we were unable to recover it. 00:25:05.594 [2024-07-16 01:18:21.321559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.594 [2024-07-16 01:18:21.321585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.594 qpair failed and we were unable to recover it. 
00:25:05.594 [2024-07-16 01:18:21.321680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.594 [2024-07-16 01:18:21.321714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.594 qpair failed and we were unable to recover it. 00:25:05.594 [2024-07-16 01:18:21.321810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.594 [2024-07-16 01:18:21.321837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.594 qpair failed and we were unable to recover it. 00:25:05.594 [2024-07-16 01:18:21.321931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.594 [2024-07-16 01:18:21.321969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.594 qpair failed and we were unable to recover it. 00:25:05.594 [2024-07-16 01:18:21.322075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.594 [2024-07-16 01:18:21.322103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.594 qpair failed and we were unable to recover it. 00:25:05.594 [2024-07-16 01:18:21.322228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.594 [2024-07-16 01:18:21.322255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.594 qpair failed and we were unable to recover it. 00:25:05.594 [2024-07-16 01:18:21.322378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.594 [2024-07-16 01:18:21.322405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.594 qpair failed and we were unable to recover it. 00:25:05.594 [2024-07-16 01:18:21.322528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.594 [2024-07-16 01:18:21.322555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.594 qpair failed and we were unable to recover it. 00:25:05.594 [2024-07-16 01:18:21.322652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.594 [2024-07-16 01:18:21.322678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.594 qpair failed and we were unable to recover it. 00:25:05.594 [2024-07-16 01:18:21.322784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.594 [2024-07-16 01:18:21.322811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.594 qpair failed and we were unable to recover it. 00:25:05.594 [2024-07-16 01:18:21.322922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.594 [2024-07-16 01:18:21.322973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.594 qpair failed and we were unable to recover it. 
00:25:05.594 [2024-07-16 01:18:21.323093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.594 [2024-07-16 01:18:21.323122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.594 qpair failed and we were unable to recover it. 00:25:05.594 [2024-07-16 01:18:21.323217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.594 [2024-07-16 01:18:21.323244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.594 qpair failed and we were unable to recover it. 00:25:05.594 [2024-07-16 01:18:21.323366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.594 [2024-07-16 01:18:21.323392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.594 qpair failed and we were unable to recover it. 00:25:05.594 [2024-07-16 01:18:21.323492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.594 [2024-07-16 01:18:21.323518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.594 qpair failed and we were unable to recover it. 00:25:05.594 [2024-07-16 01:18:21.323619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.594 [2024-07-16 01:18:21.323645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.594 qpair failed and we were unable to recover it. 00:25:05.594 [2024-07-16 01:18:21.323748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.594 [2024-07-16 01:18:21.323776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.594 qpair failed and we were unable to recover it. 00:25:05.594 [2024-07-16 01:18:21.323903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.594 [2024-07-16 01:18:21.323930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.594 qpair failed and we were unable to recover it. 00:25:05.594 [2024-07-16 01:18:21.324047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.594 [2024-07-16 01:18:21.324089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.594 qpair failed and we were unable to recover it. 00:25:05.594 [2024-07-16 01:18:21.324192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.594 [2024-07-16 01:18:21.324220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.594 qpair failed and we were unable to recover it. 00:25:05.594 [2024-07-16 01:18:21.324343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.594 [2024-07-16 01:18:21.324369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.594 qpair failed and we were unable to recover it. 
00:25:05.594 [2024-07-16 01:18:21.324466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.594 [2024-07-16 01:18:21.324492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.594 qpair failed and we were unable to recover it. 00:25:05.594 [2024-07-16 01:18:21.324587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.594 [2024-07-16 01:18:21.324612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.594 qpair failed and we were unable to recover it. 00:25:05.594 [2024-07-16 01:18:21.324750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.594 [2024-07-16 01:18:21.324776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.594 qpair failed and we were unable to recover it. 00:25:05.594 [2024-07-16 01:18:21.324903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.594 [2024-07-16 01:18:21.324929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.594 qpair failed and we were unable to recover it. 00:25:05.594 [2024-07-16 01:18:21.325029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.594 [2024-07-16 01:18:21.325055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.594 qpair failed and we were unable to recover it. 00:25:05.594 [2024-07-16 01:18:21.325164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.594 [2024-07-16 01:18:21.325204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.594 qpair failed and we were unable to recover it. 00:25:05.594 [2024-07-16 01:18:21.325307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.594 [2024-07-16 01:18:21.325335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.594 qpair failed and we were unable to recover it. 00:25:05.594 [2024-07-16 01:18:21.325449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.594 [2024-07-16 01:18:21.325490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.594 qpair failed and we were unable to recover it. 00:25:05.594 [2024-07-16 01:18:21.325606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.594 [2024-07-16 01:18:21.325635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.594 qpair failed and we were unable to recover it. 00:25:05.594 [2024-07-16 01:18:21.325765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.594 [2024-07-16 01:18:21.325792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.594 qpair failed and we were unable to recover it. 
00:25:05.594 [2024-07-16 01:18:21.325884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.594 [2024-07-16 01:18:21.325911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.594 qpair failed and we were unable to recover it. 00:25:05.594 [2024-07-16 01:18:21.326023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.594 [2024-07-16 01:18:21.326051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.594 qpair failed and we were unable to recover it. 00:25:05.594 [2024-07-16 01:18:21.326174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.594 [2024-07-16 01:18:21.326214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.594 qpair failed and we were unable to recover it. 00:25:05.594 [2024-07-16 01:18:21.326351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.594 [2024-07-16 01:18:21.326380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.594 qpair failed and we were unable to recover it. 00:25:05.594 [2024-07-16 01:18:21.326484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.595 [2024-07-16 01:18:21.326511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.595 qpair failed and we were unable to recover it. 00:25:05.595 [2024-07-16 01:18:21.326605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.595 [2024-07-16 01:18:21.326632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.595 qpair failed and we were unable to recover it. 00:25:05.595 [2024-07-16 01:18:21.326750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.595 [2024-07-16 01:18:21.326790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.595 qpair failed and we were unable to recover it. 00:25:05.595 [2024-07-16 01:18:21.326888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.595 [2024-07-16 01:18:21.326916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.595 qpair failed and we were unable to recover it. 00:25:05.595 [2024-07-16 01:18:21.327030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.595 [2024-07-16 01:18:21.327058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.595 qpair failed and we were unable to recover it. 00:25:05.595 [2024-07-16 01:18:21.327159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.595 [2024-07-16 01:18:21.327184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.595 qpair failed and we were unable to recover it. 
00:25:05.595 [2024-07-16 01:18:21.327279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.595 [2024-07-16 01:18:21.327305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.595 qpair failed and we were unable to recover it. 00:25:05.595 [2024-07-16 01:18:21.327409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.595 [2024-07-16 01:18:21.327435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.595 qpair failed and we were unable to recover it. 00:25:05.595 [2024-07-16 01:18:21.327526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.595 [2024-07-16 01:18:21.327553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.595 qpair failed and we were unable to recover it. 00:25:05.595 [2024-07-16 01:18:21.327648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.595 [2024-07-16 01:18:21.327673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.595 qpair failed and we were unable to recover it. 00:25:05.595 [2024-07-16 01:18:21.327772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.595 [2024-07-16 01:18:21.327799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.595 qpair failed and we were unable to recover it. 00:25:05.595 [2024-07-16 01:18:21.327928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.595 [2024-07-16 01:18:21.327960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.595 qpair failed and we were unable to recover it. 00:25:05.595 [2024-07-16 01:18:21.328084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.595 [2024-07-16 01:18:21.328111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.595 qpair failed and we were unable to recover it. 00:25:05.595 [2024-07-16 01:18:21.328213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.595 [2024-07-16 01:18:21.328242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.595 qpair failed and we were unable to recover it. 00:25:05.595 [2024-07-16 01:18:21.328380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.595 [2024-07-16 01:18:21.328407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.595 qpair failed and we were unable to recover it. 00:25:05.595 [2024-07-16 01:18:21.328499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.595 [2024-07-16 01:18:21.328526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.595 qpair failed and we were unable to recover it. 
00:25:05.595 [2024-07-16 01:18:21.328623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.595 [2024-07-16 01:18:21.328651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.595 qpair failed and we were unable to recover it. 00:25:05.595 [2024-07-16 01:18:21.328751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.595 [2024-07-16 01:18:21.328778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.595 qpair failed and we were unable to recover it. 00:25:05.595 [2024-07-16 01:18:21.328894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.595 [2024-07-16 01:18:21.328934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.595 qpair failed and we were unable to recover it. 00:25:05.595 [2024-07-16 01:18:21.329046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.595 [2024-07-16 01:18:21.329073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.595 qpair failed and we were unable to recover it. 00:25:05.595 [2024-07-16 01:18:21.329190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.595 [2024-07-16 01:18:21.329230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.595 qpair failed and we were unable to recover it. 00:25:05.595 [2024-07-16 01:18:21.329336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.595 [2024-07-16 01:18:21.329365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.595 qpair failed and we were unable to recover it. 00:25:05.595 [2024-07-16 01:18:21.329490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.595 [2024-07-16 01:18:21.329517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.595 qpair failed and we were unable to recover it. 00:25:05.595 [2024-07-16 01:18:21.329641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.595 [2024-07-16 01:18:21.329668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.595 qpair failed and we were unable to recover it. 00:25:05.595 [2024-07-16 01:18:21.329761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.595 [2024-07-16 01:18:21.329788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.595 qpair failed and we were unable to recover it. 00:25:05.595 [2024-07-16 01:18:21.329891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.595 [2024-07-16 01:18:21.329931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.595 qpair failed and we were unable to recover it. 
00:25:05.595 [2024-07-16 01:18:21.330054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.595 [2024-07-16 01:18:21.330081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.595 qpair failed and we were unable to recover it. 00:25:05.595 [2024-07-16 01:18:21.330182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.595 [2024-07-16 01:18:21.330211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.595 qpair failed and we were unable to recover it. 00:25:05.595 [2024-07-16 01:18:21.330333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.595 [2024-07-16 01:18:21.330360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.595 qpair failed and we were unable to recover it. 00:25:05.595 [2024-07-16 01:18:21.330458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.595 [2024-07-16 01:18:21.330486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.595 qpair failed and we were unable to recover it. 00:25:05.595 [2024-07-16 01:18:21.330594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.595 [2024-07-16 01:18:21.330621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.595 qpair failed and we were unable to recover it. 00:25:05.595 [2024-07-16 01:18:21.330720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.595 [2024-07-16 01:18:21.330747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.595 qpair failed and we were unable to recover it. 00:25:05.595 [2024-07-16 01:18:21.330869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.595 [2024-07-16 01:18:21.330895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.595 qpair failed and we were unable to recover it. 00:25:05.595 [2024-07-16 01:18:21.330999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.595 [2024-07-16 01:18:21.331032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.595 qpair failed and we were unable to recover it. 00:25:05.595 [2024-07-16 01:18:21.331127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.595 [2024-07-16 01:18:21.331154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.595 qpair failed and we were unable to recover it. 00:25:05.595 [2024-07-16 01:18:21.331249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.595 [2024-07-16 01:18:21.331276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.595 qpair failed and we were unable to recover it. 
00:25:05.595 [2024-07-16 01:18:21.331400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.595 [2024-07-16 01:18:21.331428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.595 qpair failed and we were unable to recover it. 00:25:05.595 [2024-07-16 01:18:21.331522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.595 [2024-07-16 01:18:21.331549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.595 qpair failed and we were unable to recover it. 00:25:05.595 [2024-07-16 01:18:21.331641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.595 [2024-07-16 01:18:21.331668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.595 qpair failed and we were unable to recover it. 00:25:05.595 [2024-07-16 01:18:21.331781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.595 [2024-07-16 01:18:21.331821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.595 qpair failed and we were unable to recover it. 00:25:05.595 [2024-07-16 01:18:21.331966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.595 [2024-07-16 01:18:21.332008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.595 qpair failed and we were unable to recover it. 00:25:05.595 [2024-07-16 01:18:21.332127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.595 [2024-07-16 01:18:21.332158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.595 qpair failed and we were unable to recover it. 00:25:05.595 [2024-07-16 01:18:21.332286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.595 [2024-07-16 01:18:21.332313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.595 qpair failed and we were unable to recover it. 00:25:05.595 [2024-07-16 01:18:21.332434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.595 [2024-07-16 01:18:21.332460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.595 qpair failed and we were unable to recover it. 00:25:05.595 [2024-07-16 01:18:21.332561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.595 [2024-07-16 01:18:21.332588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.595 qpair failed and we were unable to recover it. 00:25:05.595 [2024-07-16 01:18:21.332689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.595 [2024-07-16 01:18:21.332716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.595 qpair failed and we were unable to recover it. 
00:25:05.595 [2024-07-16 01:18:21.332819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.595 [2024-07-16 01:18:21.332846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.595 qpair failed and we were unable to recover it. 00:25:05.595 [2024-07-16 01:18:21.332979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.595 [2024-07-16 01:18:21.333007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.595 qpair failed and we were unable to recover it. 00:25:05.595 [2024-07-16 01:18:21.333109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.595 [2024-07-16 01:18:21.333136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.595 qpair failed and we were unable to recover it. 00:25:05.595 [2024-07-16 01:18:21.333240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.595 [2024-07-16 01:18:21.333271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.595 qpair failed and we were unable to recover it. 00:25:05.595 [2024-07-16 01:18:21.333378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.595 [2024-07-16 01:18:21.333405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.596 qpair failed and we were unable to recover it. 00:25:05.596 [2024-07-16 01:18:21.333504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.596 [2024-07-16 01:18:21.333531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.596 qpair failed and we were unable to recover it. 00:25:05.596 [2024-07-16 01:18:21.333625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.596 [2024-07-16 01:18:21.333650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.596 qpair failed and we were unable to recover it. 00:25:05.596 [2024-07-16 01:18:21.333747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.596 [2024-07-16 01:18:21.333774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.596 qpair failed and we were unable to recover it. 00:25:05.596 [2024-07-16 01:18:21.333867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.596 [2024-07-16 01:18:21.333892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.596 qpair failed and we were unable to recover it. 00:25:05.596 [2024-07-16 01:18:21.333990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.596 [2024-07-16 01:18:21.334017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.596 qpair failed and we were unable to recover it. 
00:25:05.596 [2024-07-16 01:18:21.334137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.596 [2024-07-16 01:18:21.334163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.596 qpair failed and we were unable to recover it. 00:25:05.596 [2024-07-16 01:18:21.334294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.596 [2024-07-16 01:18:21.334323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.596 qpair failed and we were unable to recover it. 00:25:05.596 [2024-07-16 01:18:21.334415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.596 [2024-07-16 01:18:21.334441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.596 qpair failed and we were unable to recover it. 00:25:05.596 [2024-07-16 01:18:21.334565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.596 [2024-07-16 01:18:21.334591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.596 qpair failed and we were unable to recover it. 00:25:05.596 [2024-07-16 01:18:21.334712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.596 [2024-07-16 01:18:21.334739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.596 qpair failed and we were unable to recover it. 00:25:05.596 [2024-07-16 01:18:21.334835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.596 [2024-07-16 01:18:21.334861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.596 qpair failed and we were unable to recover it. 00:25:05.596 [2024-07-16 01:18:21.334966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.596 [2024-07-16 01:18:21.334993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.596 qpair failed and we were unable to recover it. 00:25:05.596 [2024-07-16 01:18:21.335082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.596 [2024-07-16 01:18:21.335107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.596 qpair failed and we were unable to recover it. 00:25:05.596 [2024-07-16 01:18:21.335202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.596 [2024-07-16 01:18:21.335228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.596 qpair failed and we were unable to recover it. 00:25:05.596 [2024-07-16 01:18:21.335344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.596 [2024-07-16 01:18:21.335370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.596 qpair failed and we were unable to recover it. 
00:25:05.596 [2024-07-16 01:18:21.335469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.596 [2024-07-16 01:18:21.335495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.596 qpair failed and we were unable to recover it. 00:25:05.596 [2024-07-16 01:18:21.335617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.596 [2024-07-16 01:18:21.335643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.596 qpair failed and we were unable to recover it. 00:25:05.596 [2024-07-16 01:18:21.335734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.596 [2024-07-16 01:18:21.335761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.596 qpair failed and we were unable to recover it. 00:25:05.596 [2024-07-16 01:18:21.335859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.596 [2024-07-16 01:18:21.335885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.596 qpair failed and we were unable to recover it. 00:25:05.596 [2024-07-16 01:18:21.336016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.596 [2024-07-16 01:18:21.336043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.596 qpair failed and we were unable to recover it. 00:25:05.596 [2024-07-16 01:18:21.336168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.596 [2024-07-16 01:18:21.336194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.596 qpair failed and we were unable to recover it. 00:25:05.596 [2024-07-16 01:18:21.336295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.596 [2024-07-16 01:18:21.336321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.596 qpair failed and we were unable to recover it. 00:25:05.596 [2024-07-16 01:18:21.336455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.596 [2024-07-16 01:18:21.336486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.596 qpair failed and we were unable to recover it. 00:25:05.596 [2024-07-16 01:18:21.336580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.596 [2024-07-16 01:18:21.336605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.596 qpair failed and we were unable to recover it. 00:25:05.596 [2024-07-16 01:18:21.336703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.596 [2024-07-16 01:18:21.336728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.596 qpair failed and we were unable to recover it. 
00:25:05.596 [2024-07-16 01:18:21.336842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.596 [2024-07-16 01:18:21.336883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.596 qpair failed and we were unable to recover it. 00:25:05.596 [2024-07-16 01:18:21.337007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.596 [2024-07-16 01:18:21.337048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.596 qpair failed and we were unable to recover it. 00:25:05.596 [2024-07-16 01:18:21.337155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.596 [2024-07-16 01:18:21.337184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.596 qpair failed and we were unable to recover it. 00:25:05.596 [2024-07-16 01:18:21.337337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.596 [2024-07-16 01:18:21.337365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.596 qpair failed and we were unable to recover it. 00:25:05.596 [2024-07-16 01:18:21.337465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.596 [2024-07-16 01:18:21.337492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.596 qpair failed and we were unable to recover it. 00:25:05.596 [2024-07-16 01:18:21.337599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.596 [2024-07-16 01:18:21.337626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.596 qpair failed and we were unable to recover it. 00:25:05.596 [2024-07-16 01:18:21.337755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.596 [2024-07-16 01:18:21.337784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.596 qpair failed and we were unable to recover it. 00:25:05.596 [2024-07-16 01:18:21.337909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.596 [2024-07-16 01:18:21.337935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.596 qpair failed and we were unable to recover it. 00:25:05.596 [2024-07-16 01:18:21.338037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.596 [2024-07-16 01:18:21.338063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.596 qpair failed and we were unable to recover it. 00:25:05.596 [2024-07-16 01:18:21.338152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.596 [2024-07-16 01:18:21.338178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.596 qpair failed and we were unable to recover it. 
00:25:05.596 [2024-07-16 01:18:21.338297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.596 [2024-07-16 01:18:21.338322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.596 qpair failed and we were unable to recover it. 00:25:05.596 [2024-07-16 01:18:21.338418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.596 [2024-07-16 01:18:21.338444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.596 qpair failed and we were unable to recover it. 00:25:05.596 [2024-07-16 01:18:21.338564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.596 [2024-07-16 01:18:21.338590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.596 qpair failed and we were unable to recover it. 00:25:05.596 [2024-07-16 01:18:21.338682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.596 [2024-07-16 01:18:21.338707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.596 qpair failed and we were unable to recover it. 00:25:05.596 [2024-07-16 01:18:21.338844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.596 [2024-07-16 01:18:21.338885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.596 qpair failed and we were unable to recover it. 00:25:05.596 [2024-07-16 01:18:21.338991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.596 [2024-07-16 01:18:21.339021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.596 qpair failed and we were unable to recover it. 00:25:05.596 [2024-07-16 01:18:21.339139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.596 [2024-07-16 01:18:21.339179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.596 qpair failed and we were unable to recover it. 00:25:05.596 [2024-07-16 01:18:21.339284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.596 [2024-07-16 01:18:21.339314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.596 qpair failed and we were unable to recover it. 00:25:05.596 [2024-07-16 01:18:21.339469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.596 [2024-07-16 01:18:21.339498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.596 qpair failed and we were unable to recover it. 00:25:05.596 [2024-07-16 01:18:21.339596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.596 [2024-07-16 01:18:21.339623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.596 qpair failed and we were unable to recover it. 
00:25:05.596 [2024-07-16 01:18:21.339724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.596 [2024-07-16 01:18:21.339752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.596 qpair failed and we were unable to recover it.
[... the same three-line error triple (posix.c:1023:posix_sock_create connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error / "qpair failed and we were unable to recover it.") repeated continuously from 01:18:21.339724 through 01:18:21.368298, cycling over tqpair=0x7f7a58000b90, 0x7f7a60000b90, 0x7f7a68000b90, and 0x14b73f0, every attempt against addr=10.0.0.2, port=4420 ...]
00:25:05.601 [2024-07-16 01:18:21.368396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-07-16 01:18:21.368423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 00:25:05.601 [2024-07-16 01:18:21.368529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-07-16 01:18:21.368556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 00:25:05.601 [2024-07-16 01:18:21.368699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-07-16 01:18:21.368739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 00:25:05.601 [2024-07-16 01:18:21.368837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-07-16 01:18:21.368864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 00:25:05.601 [2024-07-16 01:18:21.368977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-07-16 01:18:21.369006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 00:25:05.601 [2024-07-16 01:18:21.369112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-07-16 01:18:21.369138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 00:25:05.601 [2024-07-16 01:18:21.369237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-07-16 01:18:21.369263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 00:25:05.601 [2024-07-16 01:18:21.369368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-07-16 01:18:21.369395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 00:25:05.601 [2024-07-16 01:18:21.369491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-07-16 01:18:21.369522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 00:25:05.601 [2024-07-16 01:18:21.369620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-07-16 01:18:21.369646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 
00:25:05.601 [2024-07-16 01:18:21.369749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-07-16 01:18:21.369778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 00:25:05.601 [2024-07-16 01:18:21.369876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-07-16 01:18:21.369903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 00:25:05.601 [2024-07-16 01:18:21.370019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-07-16 01:18:21.370060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 00:25:05.601 [2024-07-16 01:18:21.370190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-07-16 01:18:21.370218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 00:25:05.601 [2024-07-16 01:18:21.370326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-07-16 01:18:21.370352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 00:25:05.601 [2024-07-16 01:18:21.370455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-07-16 01:18:21.370482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 00:25:05.601 [2024-07-16 01:18:21.370581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-07-16 01:18:21.370609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 00:25:05.601 [2024-07-16 01:18:21.370730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-07-16 01:18:21.370755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 00:25:05.601 [2024-07-16 01:18:21.370852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-07-16 01:18:21.370879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 00:25:05.601 [2024-07-16 01:18:21.370979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-07-16 01:18:21.371006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 
00:25:05.601 [2024-07-16 01:18:21.371106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-07-16 01:18:21.371131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 00:25:05.601 [2024-07-16 01:18:21.371259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-07-16 01:18:21.371285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 00:25:05.601 [2024-07-16 01:18:21.371408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-07-16 01:18:21.371438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 00:25:05.601 [2024-07-16 01:18:21.371541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-07-16 01:18:21.371567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 00:25:05.601 [2024-07-16 01:18:21.371657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-07-16 01:18:21.371684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 00:25:05.601 [2024-07-16 01:18:21.371804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-07-16 01:18:21.371829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 00:25:05.601 [2024-07-16 01:18:21.371939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-07-16 01:18:21.371971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 00:25:05.601 [2024-07-16 01:18:21.372071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-07-16 01:18:21.372096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 00:25:05.601 [2024-07-16 01:18:21.372187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-07-16 01:18:21.372213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 00:25:05.601 [2024-07-16 01:18:21.372309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-07-16 01:18:21.372334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 
00:25:05.601 [2024-07-16 01:18:21.372435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-07-16 01:18:21.372462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 00:25:05.601 [2024-07-16 01:18:21.372562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-07-16 01:18:21.372588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 00:25:05.601 [2024-07-16 01:18:21.372685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-07-16 01:18:21.372712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 00:25:05.601 [2024-07-16 01:18:21.372814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-07-16 01:18:21.372843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 00:25:05.601 [2024-07-16 01:18:21.372950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-07-16 01:18:21.372997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 00:25:05.601 [2024-07-16 01:18:21.373117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-07-16 01:18:21.373159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 00:25:05.601 [2024-07-16 01:18:21.373290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-07-16 01:18:21.373318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 00:25:05.601 [2024-07-16 01:18:21.373417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-07-16 01:18:21.373443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 00:25:05.601 [2024-07-16 01:18:21.373536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-07-16 01:18:21.373561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 00:25:05.601 [2024-07-16 01:18:21.373686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-07-16 01:18:21.373711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 
00:25:05.602 [2024-07-16 01:18:21.373833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-07-16 01:18:21.373874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-07-16 01:18:21.373990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-07-16 01:18:21.374031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-07-16 01:18:21.374134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-07-16 01:18:21.374161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-07-16 01:18:21.374286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-07-16 01:18:21.374312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-07-16 01:18:21.374408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-07-16 01:18:21.374434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-07-16 01:18:21.374533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-07-16 01:18:21.374560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-07-16 01:18:21.374659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-07-16 01:18:21.374688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-07-16 01:18:21.374794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-07-16 01:18:21.374820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-07-16 01:18:21.374950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-07-16 01:18:21.374992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-07-16 01:18:21.375100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-07-16 01:18:21.375128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 
00:25:05.602 [2024-07-16 01:18:21.375260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-07-16 01:18:21.375287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-07-16 01:18:21.375383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-07-16 01:18:21.375410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-07-16 01:18:21.375502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-07-16 01:18:21.375529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-07-16 01:18:21.375630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-07-16 01:18:21.375657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-07-16 01:18:21.375805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-07-16 01:18:21.375833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-07-16 01:18:21.375938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-07-16 01:18:21.375979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-07-16 01:18:21.376084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-07-16 01:18:21.376110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-07-16 01:18:21.376216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-07-16 01:18:21.376242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-07-16 01:18:21.376367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-07-16 01:18:21.376392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-07-16 01:18:21.376518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-07-16 01:18:21.376544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 
00:25:05.602 [2024-07-16 01:18:21.376658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-07-16 01:18:21.376698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-07-16 01:18:21.376801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-07-16 01:18:21.376829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-07-16 01:18:21.376939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-07-16 01:18:21.376972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-07-16 01:18:21.377078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-07-16 01:18:21.377106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-07-16 01:18:21.377213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-07-16 01:18:21.377240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-07-16 01:18:21.377337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-07-16 01:18:21.377363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-07-16 01:18:21.377487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-07-16 01:18:21.377514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-07-16 01:18:21.377646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-07-16 01:18:21.377676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-07-16 01:18:21.377768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-07-16 01:18:21.377794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-07-16 01:18:21.377898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-07-16 01:18:21.377927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 
00:25:05.602 [2024-07-16 01:18:21.378060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-07-16 01:18:21.378086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-07-16 01:18:21.378207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-07-16 01:18:21.378247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-07-16 01:18:21.378381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-07-16 01:18:21.378410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-07-16 01:18:21.378509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-07-16 01:18:21.378536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-07-16 01:18:21.378630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-07-16 01:18:21.378656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-07-16 01:18:21.378748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-07-16 01:18:21.378783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-07-16 01:18:21.378878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-07-16 01:18:21.378907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-07-16 01:18:21.379048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-07-16 01:18:21.379076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-07-16 01:18:21.379171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-07-16 01:18:21.379198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-07-16 01:18:21.379294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-07-16 01:18:21.379321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 
00:25:05.602 [2024-07-16 01:18:21.379419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-07-16 01:18:21.379445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-07-16 01:18:21.379591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-07-16 01:18:21.379618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-07-16 01:18:21.379745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-07-16 01:18:21.379773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-07-16 01:18:21.379895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-07-16 01:18:21.379924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-07-16 01:18:21.380040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-07-16 01:18:21.380070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-07-16 01:18:21.380172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-07-16 01:18:21.380198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-07-16 01:18:21.380289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-07-16 01:18:21.380315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-07-16 01:18:21.380445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-07-16 01:18:21.380471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-07-16 01:18:21.380570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-07-16 01:18:21.380596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-07-16 01:18:21.380729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-07-16 01:18:21.380757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 
00:25:05.603 [2024-07-16 01:18:21.380853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-07-16 01:18:21.380880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-07-16 01:18:21.380984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-07-16 01:18:21.381013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-07-16 01:18:21.381120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-07-16 01:18:21.381147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-07-16 01:18:21.381282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-07-16 01:18:21.381324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-07-16 01:18:21.381466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-07-16 01:18:21.381496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-07-16 01:18:21.381594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-07-16 01:18:21.381622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-07-16 01:18:21.381750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-07-16 01:18:21.381777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-07-16 01:18:21.381882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-07-16 01:18:21.381909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-07-16 01:18:21.382021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-07-16 01:18:21.382047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-07-16 01:18:21.382145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-07-16 01:18:21.382172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 
00:25:05.603 [2024-07-16 01:18:21.382295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-07-16 01:18:21.382321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-07-16 01:18:21.382459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-07-16 01:18:21.382485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-07-16 01:18:21.382581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-07-16 01:18:21.382607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-07-16 01:18:21.382703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-07-16 01:18:21.382730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-07-16 01:18:21.382849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-07-16 01:18:21.382875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-07-16 01:18:21.382992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-07-16 01:18:21.383032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-07-16 01:18:21.383141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-07-16 01:18:21.383170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-07-16 01:18:21.383273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-07-16 01:18:21.383301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-07-16 01:18:21.383396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-07-16 01:18:21.383423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-07-16 01:18:21.383543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-07-16 01:18:21.383570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 
00:25:05.603 [2024-07-16 01:18:21.383669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-07-16 01:18:21.383697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-07-16 01:18:21.383804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-07-16 01:18:21.383833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-07-16 01:18:21.383935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-07-16 01:18:21.383967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-07-16 01:18:21.384068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-07-16 01:18:21.384094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-07-16 01:18:21.384216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-07-16 01:18:21.384242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-07-16 01:18:21.384358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-07-16 01:18:21.384388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-07-16 01:18:21.384487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-07-16 01:18:21.384514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-07-16 01:18:21.384610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-07-16 01:18:21.384637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-07-16 01:18:21.384730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-07-16 01:18:21.384756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-07-16 01:18:21.384902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-07-16 01:18:21.384929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 
00:25:05.603 [2024-07-16 01:18:21.385036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-07-16 01:18:21.385065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-07-16 01:18:21.385164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-07-16 01:18:21.385191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-07-16 01:18:21.385285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-07-16 01:18:21.385312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-07-16 01:18:21.385406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-07-16 01:18:21.385433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-07-16 01:18:21.385540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-07-16 01:18:21.385580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-07-16 01:18:21.385690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-07-16 01:18:21.385731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-07-16 01:18:21.385861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-07-16 01:18:21.385890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-07-16 01:18:21.386023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-07-16 01:18:21.386051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-07-16 01:18:21.386158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-07-16 01:18:21.386186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-07-16 01:18:21.386289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-07-16 01:18:21.386316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 
00:25:05.603 [2024-07-16 01:18:21.386415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-07-16 01:18:21.386442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-07-16 01:18:21.386540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-07-16 01:18:21.386567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-07-16 01:18:21.386691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-07-16 01:18:21.386719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-07-16 01:18:21.386815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-07-16 01:18:21.386842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-07-16 01:18:21.386973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-07-16 01:18:21.387001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-07-16 01:18:21.387099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-07-16 01:18:21.387126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-07-16 01:18:21.387250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-07-16 01:18:21.387277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-07-16 01:18:21.387398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-07-16 01:18:21.387425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-07-16 01:18:21.387567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-07-16 01:18:21.387608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-07-16 01:18:21.387733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-07-16 01:18:21.387761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 
00:25:05.603 [2024-07-16 01:18:21.387858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-07-16 01:18:21.387885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-07-16 01:18:21.387980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-07-16 01:18:21.388007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-07-16 01:18:21.388111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-07-16 01:18:21.388143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.604 qpair failed and we were unable to recover it. 00:25:05.604 [2024-07-16 01:18:21.388253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.604 [2024-07-16 01:18:21.388294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.604 qpair failed and we were unable to recover it. 00:25:05.604 [2024-07-16 01:18:21.388394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.604 [2024-07-16 01:18:21.388422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.604 qpair failed and we were unable to recover it. 00:25:05.604 [2024-07-16 01:18:21.388542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.604 [2024-07-16 01:18:21.388570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.604 qpair failed and we were unable to recover it. 00:25:05.604 [2024-07-16 01:18:21.388659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.604 [2024-07-16 01:18:21.388686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.604 qpair failed and we were unable to recover it. 00:25:05.604 [2024-07-16 01:18:21.388787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.604 [2024-07-16 01:18:21.388815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.604 qpair failed and we were unable to recover it. 00:25:05.604 [2024-07-16 01:18:21.388952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.604 [2024-07-16 01:18:21.388998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.604 qpair failed and we were unable to recover it. 00:25:05.604 [2024-07-16 01:18:21.389135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.604 [2024-07-16 01:18:21.389163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.604 qpair failed and we were unable to recover it. 
00:25:05.604 [2024-07-16 01:18:21.389266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.604 [2024-07-16 01:18:21.389295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.604 qpair failed and we were unable to recover it. 00:25:05.604 [2024-07-16 01:18:21.389425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.604 [2024-07-16 01:18:21.389453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.604 qpair failed and we were unable to recover it. 00:25:05.604 [2024-07-16 01:18:21.389554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.604 [2024-07-16 01:18:21.389583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.604 qpair failed and we were unable to recover it. 00:25:05.604 [2024-07-16 01:18:21.389686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.604 [2024-07-16 01:18:21.389715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.604 qpair failed and we were unable to recover it. 00:25:05.604 [2024-07-16 01:18:21.389861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.604 [2024-07-16 01:18:21.389888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.604 qpair failed and we were unable to recover it. 00:25:05.604 [2024-07-16 01:18:21.389984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.604 [2024-07-16 01:18:21.390011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.604 qpair failed and we were unable to recover it. 00:25:05.604 [2024-07-16 01:18:21.390120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.604 [2024-07-16 01:18:21.390147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.604 qpair failed and we were unable to recover it. 00:25:05.604 [2024-07-16 01:18:21.390243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.604 [2024-07-16 01:18:21.390269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.604 qpair failed and we were unable to recover it. 00:25:05.604 [2024-07-16 01:18:21.390369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.604 [2024-07-16 01:18:21.390395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.604 qpair failed and we were unable to recover it. 00:25:05.604 [2024-07-16 01:18:21.390492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.604 [2024-07-16 01:18:21.390520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.604 qpair failed and we were unable to recover it. 
00:25:05.604 [2024-07-16 01:18:21.390622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.604 [2024-07-16 01:18:21.390652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.604 qpair failed and we were unable to recover it. 00:25:05.604 [2024-07-16 01:18:21.390753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.604 [2024-07-16 01:18:21.390780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.604 qpair failed and we were unable to recover it. 00:25:05.604 [2024-07-16 01:18:21.390898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.604 [2024-07-16 01:18:21.390925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.604 qpair failed and we were unable to recover it. 00:25:05.604 [2024-07-16 01:18:21.391028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.604 [2024-07-16 01:18:21.391056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.604 qpair failed and we were unable to recover it. 00:25:05.604 [2024-07-16 01:18:21.391178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.604 [2024-07-16 01:18:21.391204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.604 qpair failed and we were unable to recover it. 00:25:05.604 [2024-07-16 01:18:21.391329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.604 [2024-07-16 01:18:21.391356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.604 qpair failed and we were unable to recover it. 00:25:05.604 [2024-07-16 01:18:21.391477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.604 [2024-07-16 01:18:21.391503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.604 qpair failed and we were unable to recover it. 00:25:05.604 [2024-07-16 01:18:21.391609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.604 [2024-07-16 01:18:21.391639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.604 qpair failed and we were unable to recover it. 00:25:05.604 [2024-07-16 01:18:21.391737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.604 [2024-07-16 01:18:21.391765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.604 qpair failed and we were unable to recover it. 00:25:05.604 [2024-07-16 01:18:21.391867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.604 [2024-07-16 01:18:21.391907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.604 qpair failed and we were unable to recover it. 
00:25:05.604 [2024-07-16 01:18:21.392014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.604 [2024-07-16 01:18:21.392044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.604 qpair failed and we were unable to recover it. 00:25:05.604 [2024-07-16 01:18:21.392136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.604 [2024-07-16 01:18:21.392163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.604 qpair failed and we were unable to recover it. 00:25:05.604 [2024-07-16 01:18:21.392283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.604 [2024-07-16 01:18:21.392309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.604 qpair failed and we were unable to recover it. 00:25:05.604 [2024-07-16 01:18:21.392395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.604 [2024-07-16 01:18:21.392421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.604 qpair failed and we were unable to recover it. 00:25:05.604 [2024-07-16 01:18:21.392543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.604 [2024-07-16 01:18:21.392572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.604 qpair failed and we were unable to recover it. 00:25:05.604 [2024-07-16 01:18:21.392674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.604 [2024-07-16 01:18:21.392702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.604 qpair failed and we were unable to recover it. 00:25:05.604 [2024-07-16 01:18:21.392801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.604 [2024-07-16 01:18:21.392828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.604 qpair failed and we were unable to recover it. 00:25:05.604 [2024-07-16 01:18:21.392916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.604 [2024-07-16 01:18:21.392943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.604 qpair failed and we were unable to recover it. 00:25:05.604 [2024-07-16 01:18:21.393056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.604 [2024-07-16 01:18:21.393083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.604 qpair failed and we were unable to recover it. 00:25:05.604 [2024-07-16 01:18:21.393175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.604 [2024-07-16 01:18:21.393201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.604 qpair failed and we were unable to recover it. 
00:25:05.604 [2024-07-16 01:18:21.393301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.604 [2024-07-16 01:18:21.393327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.605 qpair failed and we were unable to recover it. 00:25:05.605 [2024-07-16 01:18:21.393442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.605 [2024-07-16 01:18:21.393468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.605 qpair failed and we were unable to recover it. 00:25:05.605 [2024-07-16 01:18:21.393569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.605 [2024-07-16 01:18:21.393610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.605 qpair failed and we were unable to recover it. 00:25:05.605 [2024-07-16 01:18:21.393745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.605 [2024-07-16 01:18:21.393774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.605 qpair failed and we were unable to recover it. 00:25:05.605 [2024-07-16 01:18:21.393877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.605 [2024-07-16 01:18:21.393906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.605 qpair failed and we were unable to recover it. 00:25:05.605 [2024-07-16 01:18:21.394017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.605 [2024-07-16 01:18:21.394047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.605 qpair failed and we were unable to recover it. 00:25:05.605 [2024-07-16 01:18:21.394147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.605 [2024-07-16 01:18:21.394175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.605 qpair failed and we were unable to recover it. 00:25:05.605 [2024-07-16 01:18:21.394264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.605 [2024-07-16 01:18:21.394291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.605 qpair failed and we were unable to recover it. 00:25:05.605 [2024-07-16 01:18:21.394388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.605 [2024-07-16 01:18:21.394415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.605 qpair failed and we were unable to recover it. 00:25:05.605 [2024-07-16 01:18:21.394566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.605 [2024-07-16 01:18:21.394596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.605 qpair failed and we were unable to recover it. 
00:25:05.605 [2024-07-16 01:18:21.394706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.605 [2024-07-16 01:18:21.394734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.605 qpair failed and we were unable to recover it. 00:25:05.605 [2024-07-16 01:18:21.394830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.605 [2024-07-16 01:18:21.394858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.605 qpair failed and we were unable to recover it. 00:25:05.605 [2024-07-16 01:18:21.394964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.605 [2024-07-16 01:18:21.394992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.605 qpair failed and we were unable to recover it. 00:25:05.605 [2024-07-16 01:18:21.395120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.605 [2024-07-16 01:18:21.395147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.605 qpair failed and we were unable to recover it. 00:25:05.605 [2024-07-16 01:18:21.395246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.605 [2024-07-16 01:18:21.395274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.605 qpair failed and we were unable to recover it. 00:25:05.605 [2024-07-16 01:18:21.395374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.605 [2024-07-16 01:18:21.395401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.605 qpair failed and we were unable to recover it. 00:25:05.605 [2024-07-16 01:18:21.395503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.605 [2024-07-16 01:18:21.395529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.605 qpair failed and we were unable to recover it. 00:25:05.605 [2024-07-16 01:18:21.395628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.605 [2024-07-16 01:18:21.395654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.605 qpair failed and we were unable to recover it. 00:25:05.605 [2024-07-16 01:18:21.395746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.605 [2024-07-16 01:18:21.395772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.605 qpair failed and we were unable to recover it. 00:25:05.605 [2024-07-16 01:18:21.395884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.605 [2024-07-16 01:18:21.395913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.605 qpair failed and we were unable to recover it. 
00:25:05.605 [2024-07-16 01:18:21.396023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.605 [2024-07-16 01:18:21.396052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.605 qpair failed and we were unable to recover it. 00:25:05.605 [2024-07-16 01:18:21.396160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.605 [2024-07-16 01:18:21.396190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.605 qpair failed and we were unable to recover it. 00:25:05.605 [2024-07-16 01:18:21.396287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.605 [2024-07-16 01:18:21.396314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.605 qpair failed and we were unable to recover it. 00:25:05.605 [2024-07-16 01:18:21.396434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.605 [2024-07-16 01:18:21.396462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.605 qpair failed and we were unable to recover it. 00:25:05.605 [2024-07-16 01:18:21.396557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.605 [2024-07-16 01:18:21.396585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.605 qpair failed and we were unable to recover it. 00:25:05.605 [2024-07-16 01:18:21.396677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.605 [2024-07-16 01:18:21.396705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.605 qpair failed and we were unable to recover it. 00:25:05.605 [2024-07-16 01:18:21.396803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.605 [2024-07-16 01:18:21.396830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.605 qpair failed and we were unable to recover it. 00:25:05.605 [2024-07-16 01:18:21.396949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.605 [2024-07-16 01:18:21.396983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.605 qpair failed and we were unable to recover it. 00:25:05.605 [2024-07-16 01:18:21.397079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.605 [2024-07-16 01:18:21.397106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.605 qpair failed and we were unable to recover it. 00:25:05.605 [2024-07-16 01:18:21.397205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.605 [2024-07-16 01:18:21.397237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.605 qpair failed and we were unable to recover it. 
00:25:05.605 [2024-07-16 01:18:21.397332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.605 [2024-07-16 01:18:21.397358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.605 qpair failed and we were unable to recover it. 00:25:05.605 [2024-07-16 01:18:21.397448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.605 [2024-07-16 01:18:21.397475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.605 qpair failed and we were unable to recover it. 00:25:05.605 [2024-07-16 01:18:21.397576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.605 [2024-07-16 01:18:21.397602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.605 qpair failed and we were unable to recover it. 00:25:05.605 [2024-07-16 01:18:21.397727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.605 [2024-07-16 01:18:21.397754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.605 qpair failed and we were unable to recover it. 00:25:05.605 [2024-07-16 01:18:21.397867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.605 [2024-07-16 01:18:21.397908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.605 qpair failed and we were unable to recover it. 00:25:05.605 [2024-07-16 01:18:21.398028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.605 [2024-07-16 01:18:21.398058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.605 qpair failed and we were unable to recover it. 00:25:05.605 [2024-07-16 01:18:21.398154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.606 [2024-07-16 01:18:21.398180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.606 qpair failed and we were unable to recover it. 00:25:05.606 [2024-07-16 01:18:21.398291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.606 [2024-07-16 01:18:21.398318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.606 qpair failed and we were unable to recover it. 00:25:05.606 [2024-07-16 01:18:21.398418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.606 [2024-07-16 01:18:21.398445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.606 qpair failed and we were unable to recover it. 00:25:05.606 [2024-07-16 01:18:21.398552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.606 [2024-07-16 01:18:21.398580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.606 qpair failed and we were unable to recover it. 
00:25:05.606 [2024-07-16 01:18:21.398679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.606 [2024-07-16 01:18:21.398709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.606 qpair failed and we were unable to recover it. 00:25:05.606 [2024-07-16 01:18:21.398807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.606 [2024-07-16 01:18:21.398835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.606 qpair failed and we were unable to recover it. 00:25:05.606 [2024-07-16 01:18:21.398975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.606 [2024-07-16 01:18:21.399005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.606 qpair failed and we were unable to recover it. 00:25:05.606 [2024-07-16 01:18:21.399109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.606 [2024-07-16 01:18:21.399137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.606 qpair failed and we were unable to recover it. 00:25:05.606 [2024-07-16 01:18:21.399279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.606 [2024-07-16 01:18:21.399307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.606 qpair failed and we were unable to recover it. 00:25:05.606 [2024-07-16 01:18:21.399402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.606 [2024-07-16 01:18:21.399429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.606 qpair failed and we were unable to recover it. 00:25:05.606 [2024-07-16 01:18:21.399525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.606 [2024-07-16 01:18:21.399552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.606 qpair failed and we were unable to recover it. 00:25:05.606 [2024-07-16 01:18:21.399677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.606 [2024-07-16 01:18:21.399704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.606 qpair failed and we were unable to recover it. 00:25:05.606 [2024-07-16 01:18:21.399797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.606 [2024-07-16 01:18:21.399824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.606 qpair failed and we were unable to recover it. 00:25:05.606 [2024-07-16 01:18:21.399924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.606 [2024-07-16 01:18:21.399976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.606 qpair failed and we were unable to recover it. 
00:25:05.606 [2024-07-16 01:18:21.400113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.606 [2024-07-16 01:18:21.400140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.606 qpair failed and we were unable to recover it. 00:25:05.606 [2024-07-16 01:18:21.400244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.606 [2024-07-16 01:18:21.400281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.606 qpair failed and we were unable to recover it. 00:25:05.606 [2024-07-16 01:18:21.400378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.606 [2024-07-16 01:18:21.400405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.606 qpair failed and we were unable to recover it. 00:25:05.606 [2024-07-16 01:18:21.400536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.606 [2024-07-16 01:18:21.400562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.606 qpair failed and we were unable to recover it. 00:25:05.606 [2024-07-16 01:18:21.400660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.606 [2024-07-16 01:18:21.400687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.606 qpair failed and we were unable to recover it. 00:25:05.606 [2024-07-16 01:18:21.400782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.606 [2024-07-16 01:18:21.400808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.606 qpair failed and we were unable to recover it. 00:25:05.606 [2024-07-16 01:18:21.400940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.606 [2024-07-16 01:18:21.400988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.606 qpair failed and we were unable to recover it. 00:25:05.606 [2024-07-16 01:18:21.401097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.606 [2024-07-16 01:18:21.401125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.606 qpair failed and we were unable to recover it. 00:25:05.606 [2024-07-16 01:18:21.401216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.606 [2024-07-16 01:18:21.401243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.606 qpair failed and we were unable to recover it. 00:25:05.606 [2024-07-16 01:18:21.401335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.606 [2024-07-16 01:18:21.401362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.606 qpair failed and we were unable to recover it. 
00:25:05.606 [2024-07-16 01:18:21.401461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.606 [2024-07-16 01:18:21.401488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.606 qpair failed and we were unable to recover it. 00:25:05.606 [2024-07-16 01:18:21.401586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.606 [2024-07-16 01:18:21.401612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.606 qpair failed and we were unable to recover it. 00:25:05.606 [2024-07-16 01:18:21.401702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.606 [2024-07-16 01:18:21.401729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.606 qpair failed and we were unable to recover it. 00:25:05.606 [2024-07-16 01:18:21.401821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.606 [2024-07-16 01:18:21.401848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.606 qpair failed and we were unable to recover it. 00:25:05.606 [2024-07-16 01:18:21.401941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.606 [2024-07-16 01:18:21.401976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.606 qpair failed and we were unable to recover it. 00:25:05.606 [2024-07-16 01:18:21.402080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.606 [2024-07-16 01:18:21.402106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.606 qpair failed and we were unable to recover it. 00:25:05.606 [2024-07-16 01:18:21.402202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.606 [2024-07-16 01:18:21.402228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.606 qpair failed and we were unable to recover it. 00:25:05.606 [2024-07-16 01:18:21.402321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.606 [2024-07-16 01:18:21.402347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.606 qpair failed and we were unable to recover it. 00:25:05.606 [2024-07-16 01:18:21.402497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.606 [2024-07-16 01:18:21.402523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.606 qpair failed and we were unable to recover it. 00:25:05.606 [2024-07-16 01:18:21.402617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.606 [2024-07-16 01:18:21.402643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.606 qpair failed and we were unable to recover it. 
00:25:05.606 [2024-07-16 01:18:21.402739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.606 [2024-07-16 01:18:21.402765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.606 qpair failed and we were unable to recover it. 00:25:05.606 [2024-07-16 01:18:21.402882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.606 [2024-07-16 01:18:21.402910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.606 qpair failed and we were unable to recover it. 00:25:05.606 [2024-07-16 01:18:21.403023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.606 [2024-07-16 01:18:21.403062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.606 qpair failed and we were unable to recover it. 00:25:05.606 [2024-07-16 01:18:21.403175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.606 [2024-07-16 01:18:21.403216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.606 qpair failed and we were unable to recover it. 00:25:05.606 [2024-07-16 01:18:21.403356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.606 [2024-07-16 01:18:21.403386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.606 qpair failed and we were unable to recover it. 00:25:05.606 [2024-07-16 01:18:21.403515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.607 [2024-07-16 01:18:21.403543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.607 qpair failed and we were unable to recover it. 00:25:05.607 [2024-07-16 01:18:21.403646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.607 [2024-07-16 01:18:21.403673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.607 qpair failed and we were unable to recover it. 00:25:05.607 [2024-07-16 01:18:21.403771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.607 [2024-07-16 01:18:21.403798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.607 qpair failed and we were unable to recover it. 00:25:05.607 [2024-07-16 01:18:21.403909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.607 [2024-07-16 01:18:21.403937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.607 qpair failed and we were unable to recover it. 00:25:05.607 [2024-07-16 01:18:21.404047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.607 [2024-07-16 01:18:21.404075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.607 qpair failed and we were unable to recover it. 
00:25:05.607 [2024-07-16 01:18:21.404180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.607 [2024-07-16 01:18:21.404206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.607 qpair failed and we were unable to recover it. 00:25:05.607 [2024-07-16 01:18:21.404301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.607 [2024-07-16 01:18:21.404328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.607 qpair failed and we were unable to recover it. 00:25:05.607 [2024-07-16 01:18:21.404423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.607 [2024-07-16 01:18:21.404450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.607 qpair failed and we were unable to recover it. 00:25:05.607 [2024-07-16 01:18:21.404579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.607 [2024-07-16 01:18:21.404606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.607 qpair failed and we were unable to recover it. 00:25:05.607 [2024-07-16 01:18:21.404710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.607 [2024-07-16 01:18:21.404737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.607 qpair failed and we were unable to recover it. 00:25:05.607 [2024-07-16 01:18:21.404860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.607 [2024-07-16 01:18:21.404887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.607 qpair failed and we were unable to recover it. 00:25:05.607 [2024-07-16 01:18:21.405010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.607 [2024-07-16 01:18:21.405037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.607 qpair failed and we were unable to recover it. 00:25:05.607 [2024-07-16 01:18:21.405137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.607 [2024-07-16 01:18:21.405164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.607 qpair failed and we were unable to recover it. 00:25:05.607 [2024-07-16 01:18:21.405262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.607 [2024-07-16 01:18:21.405288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.607 qpair failed and we were unable to recover it. 00:25:05.607 [2024-07-16 01:18:21.405410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.607 [2024-07-16 01:18:21.405437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.607 qpair failed and we were unable to recover it. 
00:25:05.607 [2024-07-16 01:18:21.405534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.607 [2024-07-16 01:18:21.405562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.607 qpair failed and we were unable to recover it. 00:25:05.607 [2024-07-16 01:18:21.405684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.607 [2024-07-16 01:18:21.405725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.607 qpair failed and we were unable to recover it. 00:25:05.607 [2024-07-16 01:18:21.405884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.607 [2024-07-16 01:18:21.405924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.607 qpair failed and we were unable to recover it. 00:25:05.607 [2024-07-16 01:18:21.406042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.607 [2024-07-16 01:18:21.406071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.607 qpair failed and we were unable to recover it. 00:25:05.607 [2024-07-16 01:18:21.406173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.607 [2024-07-16 01:18:21.406201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.607 qpair failed and we were unable to recover it. 00:25:05.607 [2024-07-16 01:18:21.406305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.607 [2024-07-16 01:18:21.406332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.607 qpair failed and we were unable to recover it. 00:25:05.607 [2024-07-16 01:18:21.406434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.607 [2024-07-16 01:18:21.406466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.607 qpair failed and we were unable to recover it. 00:25:05.607 [2024-07-16 01:18:21.406564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.607 [2024-07-16 01:18:21.406590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.607 qpair failed and we were unable to recover it. 00:25:05.607 [2024-07-16 01:18:21.406702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.607 [2024-07-16 01:18:21.406742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.607 qpair failed and we were unable to recover it. 00:25:05.607 [2024-07-16 01:18:21.406848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.607 [2024-07-16 01:18:21.406877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.607 qpair failed and we were unable to recover it. 
00:25:05.607 [2024-07-16 01:18:21.407003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.607 [2024-07-16 01:18:21.407031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.607 qpair failed and we were unable to recover it. 00:25:05.607 [2024-07-16 01:18:21.407159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.607 [2024-07-16 01:18:21.407186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.607 qpair failed and we were unable to recover it. 00:25:05.607 [2024-07-16 01:18:21.407309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.607 [2024-07-16 01:18:21.407336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.607 qpair failed and we were unable to recover it. 00:25:05.607 [2024-07-16 01:18:21.407430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.607 [2024-07-16 01:18:21.407457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.607 qpair failed and we were unable to recover it. 00:25:05.607 [2024-07-16 01:18:21.407584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.607 [2024-07-16 01:18:21.407612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.607 qpair failed and we were unable to recover it. 00:25:05.607 [2024-07-16 01:18:21.407717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.607 [2024-07-16 01:18:21.407757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.607 qpair failed and we were unable to recover it. 00:25:05.607 [2024-07-16 01:18:21.407867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.607 [2024-07-16 01:18:21.407895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.607 qpair failed and we were unable to recover it. 00:25:05.607 [2024-07-16 01:18:21.408019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.607 [2024-07-16 01:18:21.408048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.607 qpair failed and we were unable to recover it. 00:25:05.607 [2024-07-16 01:18:21.408169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.607 [2024-07-16 01:18:21.408197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.607 qpair failed and we were unable to recover it. 00:25:05.607 [2024-07-16 01:18:21.408298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.607 [2024-07-16 01:18:21.408325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.607 qpair failed and we were unable to recover it. 
00:25:05.607 [2024-07-16 01:18:21.408436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.607 [2024-07-16 01:18:21.408465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.607 qpair failed and we were unable to recover it. 00:25:05.607 [2024-07-16 01:18:21.408604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.607 [2024-07-16 01:18:21.408633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.607 qpair failed and we were unable to recover it. 00:25:05.607 [2024-07-16 01:18:21.408753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.608 [2024-07-16 01:18:21.408794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.608 qpair failed and we were unable to recover it. 00:25:05.608 [2024-07-16 01:18:21.408924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.608 [2024-07-16 01:18:21.408952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.608 qpair failed and we were unable to recover it. 00:25:05.608 [2024-07-16 01:18:21.409063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.608 [2024-07-16 01:18:21.409090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.608 qpair failed and we were unable to recover it. 00:25:05.608 [2024-07-16 01:18:21.409187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.608 [2024-07-16 01:18:21.409213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.608 qpair failed and we were unable to recover it. 00:25:05.608 [2024-07-16 01:18:21.409340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.608 [2024-07-16 01:18:21.409366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.608 qpair failed and we were unable to recover it. 00:25:05.608 [2024-07-16 01:18:21.409501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.608 [2024-07-16 01:18:21.409528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.608 qpair failed and we were unable to recover it. 00:25:05.608 [2024-07-16 01:18:21.409618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.608 [2024-07-16 01:18:21.409645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.608 qpair failed and we were unable to recover it. 00:25:05.608 [2024-07-16 01:18:21.409749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.608 [2024-07-16 01:18:21.409778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.608 qpair failed and we were unable to recover it. 
00:25:05.608 [2024-07-16 01:18:21.409900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.608 [2024-07-16 01:18:21.409926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.608 qpair failed and we were unable to recover it.
[... the same three-message failure repeats continuously from 01:18:21.409900 through 01:18:21.438665 (log timestamps 00:25:05.608-00:25:05.614): every connect() to 10.0.0.2 port 4420 fails with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair handles 0x14b73f0, 0x7f7a58000b90, 0x7f7a60000b90, and 0x7f7a68000b90, and each qpair fails and cannot be recovered ...]
00:25:05.614 [2024-07-16 01:18:21.438762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.614 [2024-07-16 01:18:21.438787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.614 qpair failed and we were unable to recover it. 00:25:05.614 [2024-07-16 01:18:21.438883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.614 [2024-07-16 01:18:21.438909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.614 qpair failed and we were unable to recover it. 00:25:05.614 [2024-07-16 01:18:21.439004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.614 [2024-07-16 01:18:21.439030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.614 qpair failed and we were unable to recover it. 00:25:05.614 [2024-07-16 01:18:21.439130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.614 [2024-07-16 01:18:21.439156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.614 qpair failed and we were unable to recover it. 00:25:05.614 [2024-07-16 01:18:21.439258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.614 [2024-07-16 01:18:21.439287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.614 qpair failed and we were unable to recover it. 00:25:05.614 [2024-07-16 01:18:21.439391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.614 [2024-07-16 01:18:21.439418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.614 qpair failed and we were unable to recover it. 00:25:05.614 [2024-07-16 01:18:21.439526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.614 [2024-07-16 01:18:21.439567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.614 qpair failed and we were unable to recover it. 00:25:05.614 [2024-07-16 01:18:21.439674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.614 [2024-07-16 01:18:21.439701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.614 qpair failed and we were unable to recover it. 00:25:05.614 [2024-07-16 01:18:21.439796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.614 [2024-07-16 01:18:21.439822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.614 qpair failed and we were unable to recover it. 00:25:05.614 [2024-07-16 01:18:21.439944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.614 [2024-07-16 01:18:21.439976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.614 qpair failed and we were unable to recover it. 
00:25:05.614 [2024-07-16 01:18:21.440081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.614 [2024-07-16 01:18:21.440106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.614 qpair failed and we were unable to recover it. 00:25:05.614 [2024-07-16 01:18:21.440226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.614 [2024-07-16 01:18:21.440252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.614 qpair failed and we were unable to recover it. 00:25:05.614 [2024-07-16 01:18:21.440378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.614 [2024-07-16 01:18:21.440404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.614 qpair failed and we were unable to recover it. 00:25:05.614 [2024-07-16 01:18:21.440503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.614 [2024-07-16 01:18:21.440530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.614 qpair failed and we were unable to recover it. 00:25:05.614 [2024-07-16 01:18:21.440650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.614 [2024-07-16 01:18:21.440677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.614 qpair failed and we were unable to recover it. 00:25:05.614 [2024-07-16 01:18:21.440779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.614 [2024-07-16 01:18:21.440807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.614 qpair failed and we were unable to recover it. 00:25:05.614 [2024-07-16 01:18:21.440937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.614 [2024-07-16 01:18:21.440970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.614 qpair failed and we were unable to recover it. 00:25:05.614 [2024-07-16 01:18:21.441096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.614 [2024-07-16 01:18:21.441122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.614 qpair failed and we were unable to recover it. 00:25:05.614 [2024-07-16 01:18:21.441219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.614 [2024-07-16 01:18:21.441247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.614 qpair failed and we were unable to recover it. 00:25:05.614 [2024-07-16 01:18:21.441371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.614 [2024-07-16 01:18:21.441398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.614 qpair failed and we were unable to recover it. 
00:25:05.614 [2024-07-16 01:18:21.441519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.614 [2024-07-16 01:18:21.441550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.614 qpair failed and we were unable to recover it. 00:25:05.614 [2024-07-16 01:18:21.441653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.614 [2024-07-16 01:18:21.441682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.614 qpair failed and we were unable to recover it. 00:25:05.614 [2024-07-16 01:18:21.441796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.614 [2024-07-16 01:18:21.441837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.614 qpair failed and we were unable to recover it. 00:25:05.614 [2024-07-16 01:18:21.441988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.614 [2024-07-16 01:18:21.442028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.614 qpair failed and we were unable to recover it. 00:25:05.614 [2024-07-16 01:18:21.442137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.614 [2024-07-16 01:18:21.442165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.614 qpair failed and we were unable to recover it. 00:25:05.614 [2024-07-16 01:18:21.442265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.614 [2024-07-16 01:18:21.442293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.614 qpair failed and we were unable to recover it. 00:25:05.614 [2024-07-16 01:18:21.442391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.614 [2024-07-16 01:18:21.442418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.614 qpair failed and we were unable to recover it. 00:25:05.614 [2024-07-16 01:18:21.442514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.614 [2024-07-16 01:18:21.442542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.614 qpair failed and we were unable to recover it. 00:25:05.614 [2024-07-16 01:18:21.442646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.614 [2024-07-16 01:18:21.442674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.614 qpair failed and we were unable to recover it. 00:25:05.614 [2024-07-16 01:18:21.442792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.614 [2024-07-16 01:18:21.442833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.614 qpair failed and we were unable to recover it. 
00:25:05.614 [2024-07-16 01:18:21.442937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.614 [2024-07-16 01:18:21.442973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.614 qpair failed and we were unable to recover it. 00:25:05.614 [2024-07-16 01:18:21.443083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.614 [2024-07-16 01:18:21.443110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.614 qpair failed and we were unable to recover it. 00:25:05.614 [2024-07-16 01:18:21.443210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.614 [2024-07-16 01:18:21.443236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.614 qpair failed and we were unable to recover it. 00:25:05.614 [2024-07-16 01:18:21.443334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.614 [2024-07-16 01:18:21.443361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.614 qpair failed and we were unable to recover it. 00:25:05.614 [2024-07-16 01:18:21.443492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.614 [2024-07-16 01:18:21.443519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.614 qpair failed and we were unable to recover it. 00:25:05.614 [2024-07-16 01:18:21.443611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.614 [2024-07-16 01:18:21.443639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.614 qpair failed and we were unable to recover it. 00:25:05.614 [2024-07-16 01:18:21.443733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.614 [2024-07-16 01:18:21.443759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.614 qpair failed and we were unable to recover it. 00:25:05.614 [2024-07-16 01:18:21.443858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.615 [2024-07-16 01:18:21.443887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.615 qpair failed and we were unable to recover it. 00:25:05.615 [2024-07-16 01:18:21.443984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.615 [2024-07-16 01:18:21.444011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.615 qpair failed and we were unable to recover it. 00:25:05.615 [2024-07-16 01:18:21.444115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.615 [2024-07-16 01:18:21.444141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.615 qpair failed and we were unable to recover it. 
00:25:05.615 [2024-07-16 01:18:21.444232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.615 [2024-07-16 01:18:21.444259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.615 qpair failed and we were unable to recover it. 00:25:05.615 [2024-07-16 01:18:21.444360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.615 [2024-07-16 01:18:21.444386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.615 qpair failed and we were unable to recover it. 00:25:05.615 [2024-07-16 01:18:21.444484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.615 [2024-07-16 01:18:21.444516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.615 qpair failed and we were unable to recover it. 00:25:05.615 [2024-07-16 01:18:21.444648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.615 [2024-07-16 01:18:21.444676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.615 qpair failed and we were unable to recover it. 00:25:05.615 [2024-07-16 01:18:21.444777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.615 [2024-07-16 01:18:21.444805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.615 qpair failed and we were unable to recover it. 00:25:05.615 [2024-07-16 01:18:21.444929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.615 [2024-07-16 01:18:21.444961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.615 qpair failed and we were unable to recover it. 00:25:05.615 [2024-07-16 01:18:21.445059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.615 [2024-07-16 01:18:21.445086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.615 qpair failed and we were unable to recover it. 00:25:05.615 [2024-07-16 01:18:21.445187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.615 [2024-07-16 01:18:21.445218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.615 qpair failed and we were unable to recover it. 00:25:05.615 [2024-07-16 01:18:21.445322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.615 [2024-07-16 01:18:21.445349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.615 qpair failed and we were unable to recover it. 00:25:05.615 [2024-07-16 01:18:21.445447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.615 [2024-07-16 01:18:21.445475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.615 qpair failed and we were unable to recover it. 
00:25:05.615 [2024-07-16 01:18:21.445599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.615 [2024-07-16 01:18:21.445625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.615 qpair failed and we were unable to recover it. 00:25:05.615 [2024-07-16 01:18:21.445725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.615 [2024-07-16 01:18:21.445753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.615 qpair failed and we were unable to recover it. 00:25:05.615 [2024-07-16 01:18:21.445864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.615 [2024-07-16 01:18:21.445904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.615 qpair failed and we were unable to recover it. 00:25:05.615 [2024-07-16 01:18:21.446022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.615 [2024-07-16 01:18:21.446051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.615 qpair failed and we were unable to recover it. 00:25:05.615 [2024-07-16 01:18:21.446166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.615 [2024-07-16 01:18:21.446194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.615 qpair failed and we were unable to recover it. 00:25:05.615 [2024-07-16 01:18:21.446294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.615 [2024-07-16 01:18:21.446322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.615 qpair failed and we were unable to recover it. 00:25:05.615 [2024-07-16 01:18:21.446414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.615 [2024-07-16 01:18:21.446442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.615 qpair failed and we were unable to recover it. 00:25:05.615 [2024-07-16 01:18:21.446548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.615 [2024-07-16 01:18:21.446576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.615 qpair failed and we were unable to recover it. 00:25:05.615 [2024-07-16 01:18:21.446679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.615 [2024-07-16 01:18:21.446707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.615 qpair failed and we were unable to recover it. 00:25:05.615 [2024-07-16 01:18:21.446832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.615 [2024-07-16 01:18:21.446860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.615 qpair failed and we were unable to recover it. 
00:25:05.615 [2024-07-16 01:18:21.446971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.615 [2024-07-16 01:18:21.446999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.615 qpair failed and we were unable to recover it. 00:25:05.615 [2024-07-16 01:18:21.447099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.615 [2024-07-16 01:18:21.447128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.615 qpair failed and we were unable to recover it. 00:25:05.615 [2024-07-16 01:18:21.447246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.615 [2024-07-16 01:18:21.447286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.615 qpair failed and we were unable to recover it. 00:25:05.615 [2024-07-16 01:18:21.447396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.615 [2024-07-16 01:18:21.447424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.615 qpair failed and we were unable to recover it. 00:25:05.615 [2024-07-16 01:18:21.447524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.615 [2024-07-16 01:18:21.447550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.615 qpair failed and we were unable to recover it. 00:25:05.615 [2024-07-16 01:18:21.447657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.615 [2024-07-16 01:18:21.447684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.615 qpair failed and we were unable to recover it. 00:25:05.615 [2024-07-16 01:18:21.447775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.615 [2024-07-16 01:18:21.447800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.615 qpair failed and we were unable to recover it. 00:25:05.615 [2024-07-16 01:18:21.447897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.615 [2024-07-16 01:18:21.447923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.615 qpair failed and we were unable to recover it. 00:25:05.615 [2024-07-16 01:18:21.448030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.615 [2024-07-16 01:18:21.448057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.615 qpair failed and we were unable to recover it. 00:25:05.615 [2024-07-16 01:18:21.448178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.615 [2024-07-16 01:18:21.448204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.615 qpair failed and we were unable to recover it. 
00:25:05.615 [2024-07-16 01:18:21.448329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.615 [2024-07-16 01:18:21.448356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.615 qpair failed and we were unable to recover it. 00:25:05.615 [2024-07-16 01:18:21.448447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.615 [2024-07-16 01:18:21.448473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.615 qpair failed and we were unable to recover it. 00:25:05.615 [2024-07-16 01:18:21.448574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.615 [2024-07-16 01:18:21.448604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.615 qpair failed and we were unable to recover it. 00:25:05.615 [2024-07-16 01:18:21.448701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.615 [2024-07-16 01:18:21.448729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.615 qpair failed and we were unable to recover it. 00:25:05.615 [2024-07-16 01:18:21.448833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.615 [2024-07-16 01:18:21.448862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.615 qpair failed and we were unable to recover it. 00:25:05.615 [2024-07-16 01:18:21.448965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.615 [2024-07-16 01:18:21.448993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.615 qpair failed and we were unable to recover it. 00:25:05.615 [2024-07-16 01:18:21.449090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.615 [2024-07-16 01:18:21.449117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.615 qpair failed and we were unable to recover it. 00:25:05.615 [2024-07-16 01:18:21.449220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.615 [2024-07-16 01:18:21.449247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.616 qpair failed and we were unable to recover it. 00:25:05.616 [2024-07-16 01:18:21.449344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.616 [2024-07-16 01:18:21.449372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.616 qpair failed and we were unable to recover it. 00:25:05.616 [2024-07-16 01:18:21.449496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.616 [2024-07-16 01:18:21.449523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.616 qpair failed and we were unable to recover it. 
00:25:05.616 [2024-07-16 01:18:21.449620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.616 [2024-07-16 01:18:21.449647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.616 qpair failed and we were unable to recover it. 00:25:05.616 [2024-07-16 01:18:21.449744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.616 [2024-07-16 01:18:21.449771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.616 qpair failed and we were unable to recover it. 00:25:05.616 [2024-07-16 01:18:21.449872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.616 [2024-07-16 01:18:21.449898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.616 qpair failed and we were unable to recover it. 00:25:05.616 [2024-07-16 01:18:21.449998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.616 [2024-07-16 01:18:21.450025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.616 qpair failed and we were unable to recover it. 00:25:05.616 [2024-07-16 01:18:21.450119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.616 [2024-07-16 01:18:21.450145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.616 qpair failed and we were unable to recover it. 00:25:05.616 [2024-07-16 01:18:21.450240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.616 [2024-07-16 01:18:21.450267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.616 qpair failed and we were unable to recover it. 00:25:05.616 [2024-07-16 01:18:21.450358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.616 [2024-07-16 01:18:21.450384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.616 qpair failed and we were unable to recover it. 00:25:05.616 [2024-07-16 01:18:21.450480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.616 [2024-07-16 01:18:21.450507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.616 qpair failed and we were unable to recover it. 00:25:05.616 [2024-07-16 01:18:21.450627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.616 [2024-07-16 01:18:21.450657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.616 qpair failed and we were unable to recover it. 00:25:05.616 [2024-07-16 01:18:21.450760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.616 [2024-07-16 01:18:21.450787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.616 qpair failed and we were unable to recover it. 
00:25:05.616 [2024-07-16 01:18:21.450884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.616 [2024-07-16 01:18:21.450910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.616 qpair failed and we were unable to recover it. 00:25:05.616 [2024-07-16 01:18:21.451016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.616 [2024-07-16 01:18:21.451043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.616 qpair failed and we were unable to recover it. 00:25:05.616 [2024-07-16 01:18:21.451137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.616 [2024-07-16 01:18:21.451164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.616 qpair failed and we were unable to recover it. 00:25:05.616 [2024-07-16 01:18:21.451263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.616 [2024-07-16 01:18:21.451288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.616 qpair failed and we were unable to recover it. 00:25:05.616 [2024-07-16 01:18:21.451410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.616 [2024-07-16 01:18:21.451437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.616 qpair failed and we were unable to recover it. 00:25:05.616 [2024-07-16 01:18:21.451560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.616 [2024-07-16 01:18:21.451587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.616 qpair failed and we were unable to recover it. 00:25:05.616 [2024-07-16 01:18:21.451689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.616 [2024-07-16 01:18:21.451714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.616 qpair failed and we were unable to recover it. 00:25:05.616 [2024-07-16 01:18:21.451817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.616 [2024-07-16 01:18:21.451843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.616 qpair failed and we were unable to recover it. 00:25:05.616 [2024-07-16 01:18:21.451939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.616 [2024-07-16 01:18:21.451977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.616 qpair failed and we were unable to recover it. 00:25:05.616 [2024-07-16 01:18:21.452076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.616 [2024-07-16 01:18:21.452102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.616 qpair failed and we were unable to recover it. 
00:25:05.616 [2024-07-16 01:18:21.452201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.616 [2024-07-16 01:18:21.452228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.616 qpair failed and we were unable to recover it. 00:25:05.616 [2024-07-16 01:18:21.452327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.616 [2024-07-16 01:18:21.452355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.616 qpair failed and we were unable to recover it. 00:25:05.616 [2024-07-16 01:18:21.452479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.616 [2024-07-16 01:18:21.452509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.616 qpair failed and we were unable to recover it. 00:25:05.616 [2024-07-16 01:18:21.452607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.616 [2024-07-16 01:18:21.452634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.616 qpair failed and we were unable to recover it. 00:25:05.616 [2024-07-16 01:18:21.452727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.616 [2024-07-16 01:18:21.452754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.616 qpair failed and we were unable to recover it. 00:25:05.616 [2024-07-16 01:18:21.452878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.616 [2024-07-16 01:18:21.452906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.616 qpair failed and we were unable to recover it. 00:25:05.616 [2024-07-16 01:18:21.453026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.616 [2024-07-16 01:18:21.453054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.616 qpair failed and we were unable to recover it. 00:25:05.616 [2024-07-16 01:18:21.453152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.616 [2024-07-16 01:18:21.453182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.616 qpair failed and we were unable to recover it. 00:25:05.616 [2024-07-16 01:18:21.453312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.616 [2024-07-16 01:18:21.453339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.616 qpair failed and we were unable to recover it. 00:25:05.616 [2024-07-16 01:18:21.453448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.616 [2024-07-16 01:18:21.453475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.616 qpair failed and we were unable to recover it. 
00:25:05.616 [2024-07-16 01:18:21.453568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.616 [2024-07-16 01:18:21.453595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.616 qpair failed and we were unable to recover it. 00:25:05.616 [2024-07-16 01:18:21.453689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.616 [2024-07-16 01:18:21.453715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.616 qpair failed and we were unable to recover it. 00:25:05.616 [2024-07-16 01:18:21.453812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.616 [2024-07-16 01:18:21.453839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.616 qpair failed and we were unable to recover it. 00:25:05.616 [2024-07-16 01:18:21.453931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.616 [2024-07-16 01:18:21.453970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.616 qpair failed and we were unable to recover it. 00:25:05.617 [2024-07-16 01:18:21.454071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.617 [2024-07-16 01:18:21.454103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.617 qpair failed and we were unable to recover it. 00:25:05.617 [2024-07-16 01:18:21.454194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.617 [2024-07-16 01:18:21.454221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.617 qpair failed and we were unable to recover it. 00:25:05.617 [2024-07-16 01:18:21.454345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.617 [2024-07-16 01:18:21.454372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.617 qpair failed and we were unable to recover it. 00:25:05.617 [2024-07-16 01:18:21.454467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.617 [2024-07-16 01:18:21.454494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.617 qpair failed and we were unable to recover it. 00:25:05.617 [2024-07-16 01:18:21.454617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.617 [2024-07-16 01:18:21.454644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.617 qpair failed and we were unable to recover it. 00:25:05.617 [2024-07-16 01:18:21.454750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.617 [2024-07-16 01:18:21.454778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.617 qpair failed and we were unable to recover it. 
00:25:05.617 [2024-07-16 01:18:21.454890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.617 [2024-07-16 01:18:21.454932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.617 qpair failed and we were unable to recover it. 00:25:05.617 [2024-07-16 01:18:21.455090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.617 [2024-07-16 01:18:21.455131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.617 qpair failed and we were unable to recover it. 00:25:05.617 [2024-07-16 01:18:21.455249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.617 [2024-07-16 01:18:21.455278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.617 qpair failed and we were unable to recover it. 00:25:05.617 [2024-07-16 01:18:21.455377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.617 [2024-07-16 01:18:21.455405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.617 qpair failed and we were unable to recover it. 00:25:05.617 [2024-07-16 01:18:21.455508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.617 [2024-07-16 01:18:21.455536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.617 qpair failed and we were unable to recover it. 00:25:05.617 [2024-07-16 01:18:21.455637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.617 [2024-07-16 01:18:21.455665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.617 qpair failed and we were unable to recover it. 00:25:05.617 [2024-07-16 01:18:21.455766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.617 [2024-07-16 01:18:21.455793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.617 qpair failed and we were unable to recover it. 00:25:05.617 [2024-07-16 01:18:21.455906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.617 [2024-07-16 01:18:21.455933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.617 qpair failed and we were unable to recover it. 00:25:05.617 [2024-07-16 01:18:21.456065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.617 [2024-07-16 01:18:21.456092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.617 qpair failed and we were unable to recover it. 00:25:05.617 [2024-07-16 01:18:21.456183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.617 [2024-07-16 01:18:21.456210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.617 qpair failed and we were unable to recover it. 
00:25:05.617 [2024-07-16 01:18:21.456310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.617 [2024-07-16 01:18:21.456337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.617 qpair failed and we were unable to recover it. 00:25:05.617 [2024-07-16 01:18:21.456427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.617 [2024-07-16 01:18:21.456454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.617 qpair failed and we were unable to recover it. 00:25:05.617 [2024-07-16 01:18:21.456556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.617 [2024-07-16 01:18:21.456583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.617 qpair failed and we were unable to recover it. 00:25:05.617 [2024-07-16 01:18:21.456676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.617 [2024-07-16 01:18:21.456704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.617 qpair failed and we were unable to recover it. 00:25:05.617 [2024-07-16 01:18:21.456806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.617 [2024-07-16 01:18:21.456833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.617 qpair failed and we were unable to recover it. 00:25:05.617 [2024-07-16 01:18:21.456928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.617 [2024-07-16 01:18:21.456961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.617 qpair failed and we were unable to recover it. 00:25:05.617 [2024-07-16 01:18:21.457063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.617 [2024-07-16 01:18:21.457091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.617 qpair failed and we were unable to recover it. 00:25:05.617 [2024-07-16 01:18:21.457195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.617 [2024-07-16 01:18:21.457223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.617 qpair failed and we were unable to recover it. 00:25:05.617 [2024-07-16 01:18:21.457348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.617 [2024-07-16 01:18:21.457375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.617 qpair failed and we were unable to recover it. 00:25:05.617 [2024-07-16 01:18:21.457467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.617 [2024-07-16 01:18:21.457494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.617 qpair failed and we were unable to recover it. 
00:25:05.617 [2024-07-16 01:18:21.457591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.617 [2024-07-16 01:18:21.457618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.617 qpair failed and we were unable to recover it. 00:25:05.617 [2024-07-16 01:18:21.457746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.617 [2024-07-16 01:18:21.457781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.617 qpair failed and we were unable to recover it. 00:25:05.617 [2024-07-16 01:18:21.457895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.617 [2024-07-16 01:18:21.457937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.617 qpair failed and we were unable to recover it. 00:25:05.617 [2024-07-16 01:18:21.458057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.617 [2024-07-16 01:18:21.458086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.617 qpair failed and we were unable to recover it. 00:25:05.617 [2024-07-16 01:18:21.458188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.617 [2024-07-16 01:18:21.458215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.617 qpair failed and we were unable to recover it. 00:25:05.617 [2024-07-16 01:18:21.458324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.617 [2024-07-16 01:18:21.458352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.617 qpair failed and we were unable to recover it. 00:25:05.617 [2024-07-16 01:18:21.458459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.617 [2024-07-16 01:18:21.458487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.617 qpair failed and we were unable to recover it. 00:25:05.617 [2024-07-16 01:18:21.458616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.617 [2024-07-16 01:18:21.458645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.617 qpair failed and we were unable to recover it. 00:25:05.617 [2024-07-16 01:18:21.458769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.617 [2024-07-16 01:18:21.458796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.617 qpair failed and we were unable to recover it. 00:25:05.617 [2024-07-16 01:18:21.458889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.617 [2024-07-16 01:18:21.458916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.617 qpair failed and we were unable to recover it. 
00:25:05.617 [2024-07-16 01:18:21.459023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.617 [2024-07-16 01:18:21.459051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.617 qpair failed and we were unable to recover it. 00:25:05.617 [2024-07-16 01:18:21.459153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.617 [2024-07-16 01:18:21.459180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.617 qpair failed and we were unable to recover it. 00:25:05.617 [2024-07-16 01:18:21.459285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.617 [2024-07-16 01:18:21.459312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.617 qpair failed and we were unable to recover it. 00:25:05.617 [2024-07-16 01:18:21.459430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.617 [2024-07-16 01:18:21.459457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.617 qpair failed and we were unable to recover it. 00:25:05.617 [2024-07-16 01:18:21.459554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.617 [2024-07-16 01:18:21.459580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.618 qpair failed and we were unable to recover it. 00:25:05.618 [2024-07-16 01:18:21.459682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.618 [2024-07-16 01:18:21.459710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.618 qpair failed and we were unable to recover it. 00:25:05.618 [2024-07-16 01:18:21.459832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.618 [2024-07-16 01:18:21.459859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.618 qpair failed and we were unable to recover it. 00:25:05.618 [2024-07-16 01:18:21.459969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.618 [2024-07-16 01:18:21.460010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.618 qpair failed and we were unable to recover it. 00:25:05.618 [2024-07-16 01:18:21.460126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.618 [2024-07-16 01:18:21.460168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.618 qpair failed and we were unable to recover it. 00:25:05.618 [2024-07-16 01:18:21.460274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.618 [2024-07-16 01:18:21.460304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.618 qpair failed and we were unable to recover it. 
00:25:05.618 [2024-07-16 01:18:21.460407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.618 [2024-07-16 01:18:21.460436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.618 qpair failed and we were unable to recover it. 00:25:05.618 [2024-07-16 01:18:21.460529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.618 [2024-07-16 01:18:21.460557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.618 qpair failed and we were unable to recover it. 00:25:05.618 [2024-07-16 01:18:21.460658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.618 [2024-07-16 01:18:21.460686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.618 qpair failed and we were unable to recover it. 00:25:05.618 [2024-07-16 01:18:21.460787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.618 [2024-07-16 01:18:21.460814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.618 qpair failed and we were unable to recover it. 00:25:05.618 [2024-07-16 01:18:21.460930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.618 [2024-07-16 01:18:21.460986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.618 qpair failed and we were unable to recover it. 00:25:05.618 [2024-07-16 01:18:21.461097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.618 [2024-07-16 01:18:21.461125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.618 qpair failed and we were unable to recover it. 00:25:05.618 [2024-07-16 01:18:21.461223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.618 [2024-07-16 01:18:21.461250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.618 qpair failed and we were unable to recover it. 00:25:05.618 [2024-07-16 01:18:21.461346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.618 [2024-07-16 01:18:21.461373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.618 qpair failed and we were unable to recover it. 00:25:05.618 [2024-07-16 01:18:21.461500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.618 [2024-07-16 01:18:21.461532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.618 qpair failed and we were unable to recover it. 00:25:05.618 [2024-07-16 01:18:21.461632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.618 [2024-07-16 01:18:21.461661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.618 qpair failed and we were unable to recover it. 
00:25:05.618 [2024-07-16 01:18:21.461757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.618 [2024-07-16 01:18:21.461785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.618 qpair failed and we were unable to recover it. 00:25:05.618 [2024-07-16 01:18:21.461878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.618 [2024-07-16 01:18:21.461905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.618 qpair failed and we were unable to recover it. 00:25:05.618 [2024-07-16 01:18:21.462005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.618 [2024-07-16 01:18:21.462033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.618 qpair failed and we were unable to recover it. 00:25:05.618 [2024-07-16 01:18:21.462131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.618 [2024-07-16 01:18:21.462160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.618 qpair failed and we were unable to recover it. 00:25:05.618 [2024-07-16 01:18:21.462267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.618 [2024-07-16 01:18:21.462299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.618 qpair failed and we were unable to recover it. 00:25:05.618 [2024-07-16 01:18:21.462422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.618 [2024-07-16 01:18:21.462450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.618 qpair failed and we were unable to recover it. 00:25:05.618 [2024-07-16 01:18:21.462547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.618 [2024-07-16 01:18:21.462575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.618 qpair failed and we were unable to recover it. 00:25:05.618 [2024-07-16 01:18:21.462682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.618 [2024-07-16 01:18:21.462710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.618 qpair failed and we were unable to recover it. 00:25:05.618 [2024-07-16 01:18:21.462801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.618 [2024-07-16 01:18:21.462829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.618 qpair failed and we were unable to recover it. 00:25:05.618 [2024-07-16 01:18:21.462959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.618 [2024-07-16 01:18:21.462989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.618 qpair failed and we were unable to recover it. 
00:25:05.618 [2024-07-16 01:18:21.463113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.618 [2024-07-16 01:18:21.463140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.618 qpair failed and we were unable to recover it. 00:25:05.618 [2024-07-16 01:18:21.463229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.618 [2024-07-16 01:18:21.463257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.618 qpair failed and we were unable to recover it. 00:25:05.618 [2024-07-16 01:18:21.463363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.618 [2024-07-16 01:18:21.463391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.618 qpair failed and we were unable to recover it. 00:25:05.618 [2024-07-16 01:18:21.463485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.618 [2024-07-16 01:18:21.463512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.618 qpair failed and we were unable to recover it. 00:25:05.618 [2024-07-16 01:18:21.463602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.618 [2024-07-16 01:18:21.463629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.618 qpair failed and we were unable to recover it. 00:25:05.618 [2024-07-16 01:18:21.463716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.618 [2024-07-16 01:18:21.463744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.618 qpair failed and we were unable to recover it. 00:25:05.618 [2024-07-16 01:18:21.463844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.619 [2024-07-16 01:18:21.463871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.619 qpair failed and we were unable to recover it. 00:25:05.619 [2024-07-16 01:18:21.464013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.619 [2024-07-16 01:18:21.464053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.619 qpair failed and we were unable to recover it. 00:25:05.619 [2024-07-16 01:18:21.464158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.619 [2024-07-16 01:18:21.464187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.619 qpair failed and we were unable to recover it. 00:25:05.619 [2024-07-16 01:18:21.464291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.619 [2024-07-16 01:18:21.464319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.619 qpair failed and we were unable to recover it. 
00:25:05.619 [2024-07-16 01:18:21.464420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.619 [2024-07-16 01:18:21.464448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.619 qpair failed and we were unable to recover it. 00:25:05.619 [2024-07-16 01:18:21.464599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.619 [2024-07-16 01:18:21.464627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.619 qpair failed and we were unable to recover it. 00:25:05.619 [2024-07-16 01:18:21.464730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.619 [2024-07-16 01:18:21.464758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.619 qpair failed and we were unable to recover it. 00:25:05.619 [2024-07-16 01:18:21.464853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.619 [2024-07-16 01:18:21.464881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.619 qpair failed and we were unable to recover it. 00:25:05.619 [2024-07-16 01:18:21.464978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.619 [2024-07-16 01:18:21.465007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.619 qpair failed and we were unable to recover it. 00:25:05.619 [2024-07-16 01:18:21.465121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.619 [2024-07-16 01:18:21.465162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.619 qpair failed and we were unable to recover it. 00:25:05.619 [2024-07-16 01:18:21.465273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.619 [2024-07-16 01:18:21.465303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.619 qpair failed and we were unable to recover it. 00:25:05.619 [2024-07-16 01:18:21.465428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.619 [2024-07-16 01:18:21.465456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.619 qpair failed and we were unable to recover it. 00:25:05.619 [2024-07-16 01:18:21.465548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.619 [2024-07-16 01:18:21.465576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.619 qpair failed and we were unable to recover it. 00:25:05.619 [2024-07-16 01:18:21.465705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.619 [2024-07-16 01:18:21.465734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.619 qpair failed and we were unable to recover it. 
00:25:05.619 [2024-07-16 01:18:21.465833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.619 [2024-07-16 01:18:21.465861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.619 qpair failed and we were unable to recover it. 00:25:05.619 [2024-07-16 01:18:21.465963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.619 [2024-07-16 01:18:21.465992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.619 qpair failed and we were unable to recover it. 00:25:05.619 [2024-07-16 01:18:21.466090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.619 [2024-07-16 01:18:21.466117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.619 qpair failed and we were unable to recover it. 00:25:05.619 [2024-07-16 01:18:21.466218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.619 [2024-07-16 01:18:21.466259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.619 qpair failed and we were unable to recover it. 00:25:05.619 [2024-07-16 01:18:21.466355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.619 [2024-07-16 01:18:21.466383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.619 qpair failed and we were unable to recover it. 00:25:05.619 [2024-07-16 01:18:21.466514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.619 [2024-07-16 01:18:21.466544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.619 qpair failed and we were unable to recover it. 00:25:05.619 [2024-07-16 01:18:21.466645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.619 [2024-07-16 01:18:21.466671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.619 qpair failed and we were unable to recover it. 00:25:05.619 [2024-07-16 01:18:21.466762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.619 [2024-07-16 01:18:21.466789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.619 qpair failed and we were unable to recover it. 00:25:05.619 [2024-07-16 01:18:21.466926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.619 [2024-07-16 01:18:21.466979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.619 qpair failed and we were unable to recover it. 00:25:05.619 [2024-07-16 01:18:21.467086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.619 [2024-07-16 01:18:21.467115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.619 qpair failed and we were unable to recover it. 
00:25:05.619 [2024-07-16 01:18:21.467213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.619 [2024-07-16 01:18:21.467242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.619 qpair failed and we were unable to recover it.
00:25:05.619 01:18:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:25:05.619 [2024-07-16 01:18:21.467347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.619 [2024-07-16 01:18:21.467390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.619 qpair failed and we were unable to recover it.
00:25:05.619 [2024-07-16 01:18:21.467494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.619 [2024-07-16 01:18:21.467521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420
00:25:05.619 qpair failed and we were unable to recover it.
00:25:05.619 01:18:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0
00:25:05.619 [2024-07-16 01:18:21.467649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.619 [2024-07-16 01:18:21.467685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.619 qpair failed and we were unable to recover it.
00:25:05.619 [2024-07-16 01:18:21.467788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.619 [2024-07-16 01:18:21.467817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420
00:25:05.619 qpair failed and we were unable to recover it.
00:25:05.619 [2024-07-16 01:18:21.467923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.619 [2024-07-16 01:18:21.467962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.619 qpair failed and we were unable to recover it.
00:25:05.619 01:18:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:25:05.619 [2024-07-16 01:18:21.468065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.619 [2024-07-16 01:18:21.468093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.619 qpair failed and we were unable to recover it.
00:25:05.619 [2024-07-16 01:18:21.468201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.619 [2024-07-16 01:18:21.468230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.619 qpair failed and we were unable to recover it.
00:25:05.619 01:18:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable
00:25:05.619 [2024-07-16 01:18:21.468353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.619 [2024-07-16 01:18:21.468381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.619 qpair failed and we were unable to recover it.
00:25:05.619 [2024-07-16 01:18:21.468476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.619 [2024-07-16 01:18:21.468501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420
00:25:05.619 qpair failed and we were unable to recover it.
00:25:05.619 01:18:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:05.619 [2024-07-16 01:18:21.468629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.619 [2024-07-16 01:18:21.468660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.619 qpair failed and we were unable to recover it.
00:25:05.619 [2024-07-16 01:18:21.468762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.619 [2024-07-16 01:18:21.468788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.619 qpair failed and we were unable to recover it.
00:25:05.619 [2024-07-16 01:18:21.468914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.619 [2024-07-16 01:18:21.468941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.619 qpair failed and we were unable to recover it.
00:25:05.619 [2024-07-16 01:18:21.469048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.619 [2024-07-16 01:18:21.469076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.619 qpair failed and we were unable to recover it.
00:25:05.619 [2024-07-16 01:18:21.469173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.619 [2024-07-16 01:18:21.469200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.620 qpair failed and we were unable to recover it.
00:25:05.620 [2024-07-16 01:18:21.469303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.620 [2024-07-16 01:18:21.469332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.620 qpair failed and we were unable to recover it.
00:25:05.620 [2024-07-16 01:18:21.469430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.620 [2024-07-16 01:18:21.469458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420
00:25:05.620 qpair failed and we were unable to recover it.
00:25:05.620 [2024-07-16 01:18:21.469556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.620 [2024-07-16 01:18:21.469586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.620 qpair failed and we were unable to recover it. 00:25:05.620 [2024-07-16 01:18:21.469677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.620 [2024-07-16 01:18:21.469705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.620 qpair failed and we were unable to recover it. 00:25:05.620 [2024-07-16 01:18:21.469828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.620 [2024-07-16 01:18:21.469856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.620 qpair failed and we were unable to recover it. 00:25:05.620 [2024-07-16 01:18:21.469981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.620 [2024-07-16 01:18:21.470022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.620 qpair failed and we were unable to recover it. 00:25:05.620 [2024-07-16 01:18:21.470123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.620 [2024-07-16 01:18:21.470151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.620 qpair failed and we were unable to recover it. 00:25:05.620 [2024-07-16 01:18:21.470246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.620 [2024-07-16 01:18:21.470273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.620 qpair failed and we were unable to recover it. 00:25:05.620 [2024-07-16 01:18:21.470394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.620 [2024-07-16 01:18:21.470426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.620 qpair failed and we were unable to recover it. 00:25:05.620 [2024-07-16 01:18:21.470552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.620 [2024-07-16 01:18:21.470579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.620 qpair failed and we were unable to recover it. 00:25:05.620 [2024-07-16 01:18:21.470677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.620 [2024-07-16 01:18:21.470704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.620 qpair failed and we were unable to recover it. 00:25:05.620 [2024-07-16 01:18:21.470799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.620 [2024-07-16 01:18:21.470827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.620 qpair failed and we were unable to recover it. 
00:25:05.620 [2024-07-16 01:18:21.470924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.620 [2024-07-16 01:18:21.470953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.620 qpair failed and we were unable to recover it. 00:25:05.620 [2024-07-16 01:18:21.471070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.620 [2024-07-16 01:18:21.471097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.620 qpair failed and we were unable to recover it. 00:25:05.620 [2024-07-16 01:18:21.471189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.620 [2024-07-16 01:18:21.471217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.620 qpair failed and we were unable to recover it. 00:25:05.620 [2024-07-16 01:18:21.471337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.620 [2024-07-16 01:18:21.471364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.620 qpair failed and we were unable to recover it. 00:25:05.620 [2024-07-16 01:18:21.471492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.620 [2024-07-16 01:18:21.471522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.620 qpair failed and we were unable to recover it. 00:25:05.620 [2024-07-16 01:18:21.471630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.620 [2024-07-16 01:18:21.471658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.620 qpair failed and we were unable to recover it. 00:25:05.620 [2024-07-16 01:18:21.471752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.620 [2024-07-16 01:18:21.471779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.620 qpair failed and we were unable to recover it. 00:25:05.620 [2024-07-16 01:18:21.471881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.620 [2024-07-16 01:18:21.471910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.620 qpair failed and we were unable to recover it. 00:25:05.620 [2024-07-16 01:18:21.472037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.620 [2024-07-16 01:18:21.472067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.620 qpair failed and we were unable to recover it. 00:25:05.620 [2024-07-16 01:18:21.472164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.620 [2024-07-16 01:18:21.472191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.620 qpair failed and we were unable to recover it. 
00:25:05.620 [2024-07-16 01:18:21.472349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.620 [2024-07-16 01:18:21.472376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.620 qpair failed and we were unable to recover it. 00:25:05.620 [2024-07-16 01:18:21.472466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.620 [2024-07-16 01:18:21.472494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.620 qpair failed and we were unable to recover it. 00:25:05.620 [2024-07-16 01:18:21.472604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.620 [2024-07-16 01:18:21.472631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.620 qpair failed and we were unable to recover it. 00:25:05.620 [2024-07-16 01:18:21.472732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.620 [2024-07-16 01:18:21.472761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.620 qpair failed and we were unable to recover it. 00:25:05.620 [2024-07-16 01:18:21.472858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.620 [2024-07-16 01:18:21.472886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.620 qpair failed and we were unable to recover it. 00:25:05.620 [2024-07-16 01:18:21.473015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.620 [2024-07-16 01:18:21.473043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.620 qpair failed and we were unable to recover it. 00:25:05.620 [2024-07-16 01:18:21.473143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.620 [2024-07-16 01:18:21.473170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.620 qpair failed and we were unable to recover it. 00:25:05.620 [2024-07-16 01:18:21.473299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.620 [2024-07-16 01:18:21.473326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.620 qpair failed and we were unable to recover it. 00:25:05.620 [2024-07-16 01:18:21.473462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.620 [2024-07-16 01:18:21.473489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.620 qpair failed and we were unable to recover it. 00:25:05.620 [2024-07-16 01:18:21.473608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.620 [2024-07-16 01:18:21.473636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.620 qpair failed and we were unable to recover it. 
00:25:05.620 [2024-07-16 01:18:21.473733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.620 [2024-07-16 01:18:21.473760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.620 qpair failed and we were unable to recover it. 00:25:05.620 [2024-07-16 01:18:21.473870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.620 [2024-07-16 01:18:21.473911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.620 qpair failed and we were unable to recover it. 00:25:05.620 [2024-07-16 01:18:21.474056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.620 [2024-07-16 01:18:21.474085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.620 qpair failed and we were unable to recover it. 00:25:05.620 [2024-07-16 01:18:21.474194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.620 [2024-07-16 01:18:21.474222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.620 qpair failed and we were unable to recover it. 00:25:05.620 [2024-07-16 01:18:21.474325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.620 [2024-07-16 01:18:21.474352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.620 qpair failed and we were unable to recover it. 00:25:05.620 [2024-07-16 01:18:21.474472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.620 [2024-07-16 01:18:21.474499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.620 qpair failed and we were unable to recover it. 00:25:05.620 [2024-07-16 01:18:21.474594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.621 [2024-07-16 01:18:21.474621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.621 qpair failed and we were unable to recover it. 00:25:05.621 [2024-07-16 01:18:21.474727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.621 [2024-07-16 01:18:21.474755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.621 qpair failed and we were unable to recover it. 00:25:05.621 [2024-07-16 01:18:21.474850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.621 [2024-07-16 01:18:21.474878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.621 qpair failed and we were unable to recover it. 00:25:05.621 [2024-07-16 01:18:21.474978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.621 [2024-07-16 01:18:21.475018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.621 qpair failed and we were unable to recover it. 
00:25:05.621 [2024-07-16 01:18:21.475116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.621 [2024-07-16 01:18:21.475142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.621 qpair failed and we were unable to recover it. 00:25:05.621 [2024-07-16 01:18:21.475239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.621 [2024-07-16 01:18:21.475270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.621 qpair failed and we were unable to recover it. 00:25:05.621 [2024-07-16 01:18:21.475369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.621 [2024-07-16 01:18:21.475397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.621 qpair failed and we were unable to recover it. 00:25:05.621 [2024-07-16 01:18:21.475494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.621 [2024-07-16 01:18:21.475522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.621 qpair failed and we were unable to recover it. 00:25:05.621 [2024-07-16 01:18:21.475621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.621 [2024-07-16 01:18:21.475648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.621 qpair failed and we were unable to recover it. 00:25:05.621 [2024-07-16 01:18:21.475798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.621 [2024-07-16 01:18:21.475826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.621 qpair failed and we were unable to recover it. 00:25:05.621 [2024-07-16 01:18:21.475933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.621 [2024-07-16 01:18:21.475966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.621 qpair failed and we were unable to recover it. 00:25:05.621 [2024-07-16 01:18:21.476082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.621 [2024-07-16 01:18:21.476109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.621 qpair failed and we were unable to recover it. 00:25:05.621 [2024-07-16 01:18:21.476199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.621 [2024-07-16 01:18:21.476226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.621 qpair failed and we were unable to recover it. 00:25:05.621 [2024-07-16 01:18:21.476328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.621 [2024-07-16 01:18:21.476366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.621 qpair failed and we were unable to recover it. 
00:25:05.621 [2024-07-16 01:18:21.476486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.621 [2024-07-16 01:18:21.476512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.621 qpair failed and we were unable to recover it. 00:25:05.621 [2024-07-16 01:18:21.476609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.621 [2024-07-16 01:18:21.476637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.621 qpair failed and we were unable to recover it. 00:25:05.621 [2024-07-16 01:18:21.476735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.621 [2024-07-16 01:18:21.476764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.621 qpair failed and we were unable to recover it. 00:25:05.621 [2024-07-16 01:18:21.476870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.621 [2024-07-16 01:18:21.476897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.621 qpair failed and we were unable to recover it. 00:25:05.621 [2024-07-16 01:18:21.476995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.621 [2024-07-16 01:18:21.477027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.621 qpair failed and we were unable to recover it. 00:25:05.621 [2024-07-16 01:18:21.477121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.621 [2024-07-16 01:18:21.477148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.621 qpair failed and we were unable to recover it. 00:25:05.621 [2024-07-16 01:18:21.477272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.621 [2024-07-16 01:18:21.477299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.621 qpair failed and we were unable to recover it. 00:25:05.621 [2024-07-16 01:18:21.477397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.621 [2024-07-16 01:18:21.477424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.621 qpair failed and we were unable to recover it. 00:25:05.621 [2024-07-16 01:18:21.477552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.621 [2024-07-16 01:18:21.477578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.621 qpair failed and we were unable to recover it. 00:25:05.621 [2024-07-16 01:18:21.477676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.621 [2024-07-16 01:18:21.477703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.621 qpair failed and we were unable to recover it. 
00:25:05.621 [2024-07-16 01:18:21.477804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.621 [2024-07-16 01:18:21.477834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a58000b90 with addr=10.0.0.2, port=4420 00:25:05.621 qpair failed and we were unable to recover it. 00:25:05.621 [2024-07-16 01:18:21.477948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.621 [2024-07-16 01:18:21.477995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.621 qpair failed and we were unable to recover it. 00:25:05.621 [2024-07-16 01:18:21.478111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.621 [2024-07-16 01:18:21.478141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.621 qpair failed and we were unable to recover it. 00:25:05.621 [2024-07-16 01:18:21.478239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.621 [2024-07-16 01:18:21.478266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.621 qpair failed and we were unable to recover it. 00:25:05.621 [2024-07-16 01:18:21.478364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.621 [2024-07-16 01:18:21.478392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.621 qpair failed and we were unable to recover it. 00:25:05.621 [2024-07-16 01:18:21.478519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.621 [2024-07-16 01:18:21.478546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.621 qpair failed and we were unable to recover it. 00:25:05.621 [2024-07-16 01:18:21.478638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.621 [2024-07-16 01:18:21.478665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.621 qpair failed and we were unable to recover it. 00:25:05.621 [2024-07-16 01:18:21.478765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.621 [2024-07-16 01:18:21.478791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.621 qpair failed and we were unable to recover it. 00:25:05.621 [2024-07-16 01:18:21.478888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.621 [2024-07-16 01:18:21.478915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.621 qpair failed and we were unable to recover it. 00:25:05.621 [2024-07-16 01:18:21.479014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.621 [2024-07-16 01:18:21.479041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.621 qpair failed and we were unable to recover it. 
00:25:05.621 [2024-07-16 01:18:21.479136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.621 [2024-07-16 01:18:21.479163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a60000b90 with addr=10.0.0.2, port=4420 00:25:05.621 qpair failed and we were unable to recover it. 
00:25:05.621-00:25:05.623 [01:18:21.479277 through 01:18:21.487902] The same three-message sequence (connect() failed, errno = 111 / sock connection error / qpair failed and we were unable to recover it) repeats back to back for tqpairs 0x7f7a68000b90, 0x14b73f0, and 0x7f7a58000b90, all with addr=10.0.0.2, port=4420; the identical repetitions are elided here. 
00:25:05.623 A controller has encountered a failure and is being reset. 
00:25:05.623-00:25:05.624 [01:18:21.488052 through 01:18:21.491637] Identical reconnect failures continue for tqpairs 0x7f7a60000b90, 0x7f7a58000b90, and 0x14b73f0 (errno = 111, addr=10.0.0.2, port=4420); elided. 
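errno = 111 in the retry storm above is ECONNREFUSED: the disconnect scenario has taken the target listener at 10.0.0.2:4420 down, so every host-side reconnect attempt is refused until the listener is restored (the "Controller properly reset." line further on marks the recovery). A one-liner sketch for decoding such errno values on a build host, assuming python3 is available as it is on these CI images:
  python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'
  # prints: ECONNREFUSED Connection refused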
00:25:05.624 [2024-07-16 01:18:21.491771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.624 [2024-07-16 01:18:21.491798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73f0 with addr=10.0.0.2, port=4420 00:25:05.624 qpair failed and we were unable to recover it. 00:25:05.624 [2024-07-16 01:18:21.491908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.624 [2024-07-16 01:18:21.491951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a68000b90 with addr=10.0.0.2, port=4420 00:25:05.624 01:18:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:05.624 qpair failed and we were unable to recover it. 00:25:05.624 [2024-07-16 01:18:21.492122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.624 [2024-07-16 01:18:21.492170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14c53f0 with addr=10.0.0.2, port=4420 00:25:05.624 [2024-07-16 01:18:21.492193] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c53f0 is same with the state(5) to be set 00:25:05.624 01:18:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:05.624 [2024-07-16 01:18:21.492221] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c53f0 (9): Bad file descriptor 00:25:05.624 [2024-07-16 01:18:21.492254] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:05.624 [2024-07-16 01:18:21.492269] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:05.624 [2024-07-16 01:18:21.492287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:05.624 Unable to reset the controller. 
00:25:05.624 01:18:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.624 01:18:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:05.624 Malloc0 00:25:05.624 01:18:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.624 01:18:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:05.624 01:18:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.624 01:18:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:05.624 [2024-07-16 01:18:21.523888] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:05.882 01:18:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.882 01:18:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:05.882 01:18:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.882 01:18:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:05.882 01:18:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.882 01:18:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:05.882 01:18:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.882 01:18:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:05.882 01:18:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.882 01:18:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:05.882 01:18:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.882 01:18:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:05.882 [2024-07-16 01:18:21.552185] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:05.882 01:18:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.882 01:18:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:05.882 01:18:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.882 01:18:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:05.882 01:18:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
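The rpc_cmd traces above are effectively wrappers around scripts/rpc.py. As a minimal sketch of the same target bring-up outside the test harness (assuming a running nvmf_tgt on the default RPC socket; the commands and arguments are copied from the trace itself):
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420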
00:25:05.882 01:18:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 61447 00:25:06.813 Controller properly reset. 00:25:10.985 Initializing NVMe Controllers 00:25:10.985 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:10.985 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:10.985 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:25:10.985 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:25:10.985 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:25:10.985 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:25:10.985 Initialization complete. Launching workers. 00:25:10.985 Starting thread on core 1 00:25:10.985 Starting thread on core 2 00:25:10.985 Starting thread on core 3 00:25:10.985 Starting thread on core 0 00:25:10.985 01:18:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:25:10.985 00:25:10.985 real 0m10.747s 00:25:10.985 user 0m31.579s 00:25:10.985 sys 0m7.479s 00:25:10.985 01:18:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:10.985 01:18:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:10.985 ************************************ 00:25:10.985 END TEST nvmf_target_disconnect_tc2 00:25:10.985 ************************************ 00:25:10.985 01:18:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:25:10.985 01:18:26 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:25:10.985 01:18:26 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:25:10.985 01:18:26 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:25:10.985 01:18:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:10.985 01:18:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:25:10.985 01:18:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:10.985 01:18:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:25:10.985 01:18:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:10.985 01:18:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:10.985 rmmod nvme_tcp 00:25:10.985 rmmod nvme_fabrics 00:25:10.985 rmmod nvme_keyring 00:25:10.985 01:18:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:10.985 01:18:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:25:10.985 01:18:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:25:10.985 01:18:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 61929 ']' 00:25:10.985 01:18:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 61929 00:25:10.986 01:18:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 61929 ']' 00:25:10.986 01:18:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 61929 00:25:10.986 01:18:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:25:10.986 01:18:26 
nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:10.986 01:18:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61929 00:25:10.986 01:18:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:25:10.986 01:18:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:25:10.986 01:18:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61929' 00:25:10.986 killing process with pid 61929 00:25:10.986 01:18:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 61929 00:25:10.986 01:18:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 61929 00:25:11.244 01:18:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:11.244 01:18:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:11.244 01:18:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:11.244 01:18:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:11.244 01:18:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:11.244 01:18:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:11.244 01:18:27 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:11.244 01:18:27 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:13.774 01:18:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:13.774 00:25:13.774 real 0m15.695s 00:25:13.774 user 0m56.964s 00:25:13.774 sys 0m10.015s 00:25:13.774 01:18:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:13.774 01:18:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:13.774 ************************************ 00:25:13.774 END TEST nvmf_target_disconnect 00:25:13.774 ************************************ 00:25:13.774 01:18:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:13.774 01:18:29 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:25:13.774 01:18:29 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:13.774 01:18:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:13.774 01:18:29 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:25:13.774 00:25:13.774 real 19m11.616s 00:25:13.774 user 45m5.504s 00:25:13.774 sys 5m2.636s 00:25:13.774 01:18:29 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:13.774 01:18:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:13.774 ************************************ 00:25:13.774 END TEST nvmf_tcp 00:25:13.774 ************************************ 00:25:13.774 01:18:29 -- common/autotest_common.sh@1142 -- # return 0 00:25:13.774 01:18:29 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:25:13.774 01:18:29 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:13.774 01:18:29 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:13.774 01:18:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:13.774 01:18:29 -- common/autotest_common.sh@10 -- # set +x 
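Each test stage in this log is driven through run_test; the spdkcli stage that begins below can, as a standalone sketch (assuming a built SPDK tree at the workspace path recorded above and root privileges, as used in CI), be invoked directly:
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  sudo ./test/spdkcli/nvmf.sh --transport=tcp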
00:25:13.774 ************************************ 00:25:13.774 START TEST spdkcli_nvmf_tcp 00:25:13.774 ************************************ 00:25:13.774 01:18:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:13.774 * Looking for test storage... 00:25:13.774 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:25:13.774 01:18:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:25:13.774 01:18:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:25:13.774 01:18:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:25:13.774 01:18:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:13.774 01:18:29 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:25:13.774 01:18:29 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:13.774 01:18:29 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:13.774 01:18:29 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:13.774 01:18:29 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:13.774 01:18:29 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:13.774 01:18:29 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:13.774 01:18:29 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:13.774 01:18:29 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:13.774 01:18:29 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:13.774 01:18:29 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:13.774 01:18:29 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:25:13.774 01:18:29 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:25:13.774 01:18:29 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:13.774 01:18:29 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:13.774 01:18:29 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:13.774 01:18:29 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:13.774 01:18:29 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:13.774 01:18:29 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:13.774 01:18:29 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:13.774 01:18:29 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:13.774 01:18:29 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.774 01:18:29 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.774 01:18:29 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.774 01:18:29 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:25:13.774 01:18:29 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.774 01:18:29 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:25:13.774 01:18:29 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:13.774 01:18:29 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:13.774 01:18:29 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:13.774 01:18:29 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:13.774 01:18:29 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:13.774 01:18:29 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:13.774 01:18:29 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:13.774 01:18:29 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:13.774 01:18:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:25:13.774 01:18:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:25:13.774 01:18:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:25:13.774 01:18:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:25:13.774 01:18:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:13.774 01:18:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:13.774 01:18:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:25:13.774 01:18:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=63130 00:25:13.774 01:18:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:25:13.774 01:18:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 
63130 00:25:13.774 01:18:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 63130 ']' 00:25:13.774 01:18:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:13.774 01:18:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:13.774 01:18:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:13.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:13.774 01:18:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:13.774 01:18:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:13.774 [2024-07-16 01:18:29.405410] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:25:13.774 [2024-07-16 01:18:29.405486] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63130 ] 00:25:13.774 EAL: No free 2048 kB hugepages reported on node 1 00:25:13.774 [2024-07-16 01:18:29.462360] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:13.774 [2024-07-16 01:18:29.570345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:13.774 [2024-07-16 01:18:29.570350] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:13.774 01:18:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:13.774 01:18:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:25:13.774 01:18:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:25:13.774 01:18:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:13.774 01:18:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:13.774 01:18:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:25:13.774 01:18:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:25:13.774 01:18:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:25:13.774 01:18:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:13.774 01:18:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:13.775 01:18:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:25:13.775 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:25:13.775 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:25:13.775 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:25:13.775 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:25:13.775 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:25:13.775 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:25:13.775 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:13.775 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:25:13.775 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 
00:25:13.775 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:13.775 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:13.775 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:25:13.775 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:13.775 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:13.775 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:25:13.775 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:13.775 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:13.775 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:13.775 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:13.775 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:25:13.775 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:25:13.775 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:13.775 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:25:13.775 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:13.775 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:25:13.775 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:25:13.775 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:25:13.775 ' 00:25:16.301 [2024-07-16 01:18:32.215801] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:17.672 [2024-07-16 01:18:33.436080] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:25:20.200 [2024-07-16 01:18:35.687017] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:25:22.099 [2024-07-16 01:18:37.621014] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:25:23.471 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:25:23.471 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:25:23.471 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:25:23.471 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:25:23.471 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:25:23.471 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:25:23.471 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:25:23.471 
Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:23.471 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:25:23.471 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:25:23.471 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:23.471 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:23.471 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:25:23.471 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:23.471 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:23.471 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:25:23.471 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:23.471 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:23.471 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:23.471 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:23.471 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:25:23.471 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:25:23.471 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:23.471 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:25:23.471 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:23.471 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:25:23.471 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:25:23.471 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:25:23.471 01:18:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:25:23.471 01:18:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:23.471 01:18:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:23.471 01:18:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:25:23.471 01:18:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:23.471 01:18:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:23.471 01:18:39 
spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:25:23.471 01:18:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:25:23.728 01:18:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:25:23.728 01:18:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:25:23.728 01:18:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:25:23.728 01:18:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:23.728 01:18:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:23.728 01:18:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:25:23.728 01:18:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:23.728 01:18:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:23.728 01:18:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:25:23.728 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:25:23.728 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:23.728 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:25:23.728 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:25:23.728 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:25:23.728 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:25:23.728 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:23.728 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:25:23.728 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:25:23.728 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:25:23.728 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:25:23.728 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:25:23.728 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:25:23.728 ' 00:25:28.982 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:25:28.982 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:25:28.982 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:28.982 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:25:28.982 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:25:28.982 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:25:28.982 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 
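The check_match step traced above regenerates the live spdkcli configuration listing and verifies it against a stored pattern file. A hand-run sketch of the same check (paths relative to the repo root; the output redirection is implied by the rm -f of the generated file in the trace):
  scripts/spdkcli.py ll /nvmf > test/spdkcli/match_files/spdkcli_nvmf.test
  test/app/match/match test/spdkcli/match_files/spdkcli_nvmf.test.match
  rm -f test/spdkcli/match_files/spdkcli_nvmf.test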
00:25:28.982 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:28.982 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:25:28.982 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:25:28.982 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:25:28.982 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:25:28.982 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:25:28.982 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:25:28.982 01:18:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:25:28.982 01:18:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:28.982 01:18:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:28.982 01:18:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 63130 00:25:28.982 01:18:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 63130 ']' 00:25:28.982 01:18:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 63130 00:25:28.982 01:18:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:25:28.982 01:18:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:28.982 01:18:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63130 00:25:28.982 01:18:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:28.982 01:18:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:28.982 01:18:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63130' 00:25:28.982 killing process with pid 63130 00:25:28.982 01:18:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 63130 00:25:28.982 01:18:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 63130 00:25:29.241 01:18:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:25:29.241 01:18:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:25:29.241 01:18:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 63130 ']' 00:25:29.241 01:18:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 63130 00:25:29.241 01:18:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 63130 ']' 00:25:29.241 01:18:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 63130 00:25:29.241 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (63130) - No such process 00:25:29.241 01:18:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 63130 is not found' 00:25:29.241 Process with pid 63130 is not found 00:25:29.241 01:18:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:25:29.241 01:18:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:25:29.241 01:18:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:25:29.241 00:25:29.241 real 0m15.884s 00:25:29.241 user 0m33.428s 00:25:29.241 sys 0m0.815s 00:25:29.241 01:18:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:29.241 01:18:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:29.241 ************************************ 
00:25:29.241 END TEST spdkcli_nvmf_tcp 00:25:29.241 ************************************ 00:25:29.241 01:18:45 -- common/autotest_common.sh@1142 -- # return 0 00:25:29.241 01:18:45 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:29.241 01:18:45 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:29.241 01:18:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:29.241 01:18:45 -- common/autotest_common.sh@10 -- # set +x 00:25:29.241 ************************************ 00:25:29.241 START TEST nvmf_identify_passthru 00:25:29.241 ************************************ 00:25:29.241 01:18:45 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:29.500 * Looking for test storage... 00:25:29.500 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:29.500 01:18:45 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:29.500 01:18:45 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:25:29.500 01:18:45 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:29.500 01:18:45 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:29.500 01:18:45 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:29.500 01:18:45 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:29.500 01:18:45 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:29.500 01:18:45 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:29.500 01:18:45 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:29.500 01:18:45 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:29.500 01:18:45 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:29.500 01:18:45 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:29.501 01:18:45 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:25:29.501 01:18:45 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:25:29.501 01:18:45 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:29.501 01:18:45 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:29.501 01:18:45 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:29.501 01:18:45 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:29.501 01:18:45 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:29.501 01:18:45 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:29.501 01:18:45 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:29.501 01:18:45 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:29.501 01:18:45 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.501 01:18:45 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.501 01:18:45 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.501 01:18:45 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:25:29.501 01:18:45 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.501 01:18:45 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:25:29.501 01:18:45 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:29.501 01:18:45 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:29.501 01:18:45 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:29.501 01:18:45 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:29.501 01:18:45 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:29.501 01:18:45 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:29.501 01:18:45 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:29.501 01:18:45 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:29.501 01:18:45 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:29.501 01:18:45 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:29.501 01:18:45 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:29.501 01:18:45 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:29.501 01:18:45 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.501 01:18:45 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.501 01:18:45 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.501 01:18:45 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:25:29.501 01:18:45 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.501 01:18:45 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:25:29.501 01:18:45 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:29.501 01:18:45 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:29.501 01:18:45 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:29.501 01:18:45 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:29.501 01:18:45 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:29.501 01:18:45 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:29.501 01:18:45 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:29.501 01:18:45 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:29.501 01:18:45 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:29.501 01:18:45 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:29.501 01:18:45 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:25:29.501 01:18:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:31.402 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:31.402 01:18:47 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:25:31.402 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:31.402 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:31.402 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:31.402 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:31.402 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:31.402 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:25:31.402 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:31.402 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:25:31.402 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:25:31.402 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:25:31.402 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:25:31.402 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:25:31.402 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:25:31.402 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:31.402 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:31.402 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:31.402 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:31.402 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:31.402 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:31.402 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:31.403 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:31.403 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:31.403 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:31.403 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:31.403 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:31.403 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:31.403 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:31.403 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:31.403 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:31.403 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:31.403 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:31.403 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:25:31.403 Found 0000:09:00.0 (0x8086 - 0x159b) 00:25:31.403 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:31.403 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:31.403 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:25:31.403 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:31.403 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:31.403 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:31.403 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:25:31.403 Found 0000:09:00.1 (0x8086 - 0x159b) 00:25:31.403 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:31.403 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:31.403 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:31.403 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:31.403 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:31.403 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:31.403 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:31.403 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:31.403 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:31.403 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:31.403 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:31.403 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:31.403 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:31.403 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:31.403 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:31.403 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:31.403 Found net devices under 0000:09:00.0: cvl_0_0 00:25:31.403 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:31.403 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:31.403 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:31.403 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:31.403 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:31.403 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:31.403 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:31.403 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:31.403 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:31.403 Found net devices under 0000:09:00.1: cvl_0_1 00:25:31.403 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:31.403 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:31.403 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:25:31.403 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:31.403 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
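The nvmf_tcp_init sequence traced below splits the two cvl ports between the host and a fresh network namespace, so initiator (10.0.0.1) and target (10.0.0.2) traffic cross a real link rather than loopback. Condensed to its essential commands (interface names and addresses exactly as in this trace):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays on the host
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
  ping -c 1 10.0.0.2                                   # reachability check, as below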
00:25:31.403 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:31.403 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:31.403 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:31.403 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:31.403 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:31.403 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:31.403 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:31.403 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:31.403 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:31.403 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:31.403 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:31.403 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:31.403 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:31.403 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:31.403 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:31.403 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:31.403 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:31.403 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:31.403 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:31.661 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:31.662 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:31.662 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:31.662 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:25:31.662 00:25:31.662 --- 10.0.0.2 ping statistics --- 00:25:31.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:31.662 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:25:31.662 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:31.662 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:31.662 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.065 ms 00:25:31.662 00:25:31.662 --- 10.0.0.1 ping statistics --- 00:25:31.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:31.662 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:25:31.662 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:31.662 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:25:31.662 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:31.662 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:31.662 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:31.662 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:31.662 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:31.662 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:31.662 01:18:47 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:31.662 01:18:47 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:25:31.662 01:18:47 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:31.662 01:18:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:31.662 01:18:47 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:25:31.662 01:18:47 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:25:31.662 01:18:47 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:25:31.662 01:18:47 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:25:31.662 01:18:47 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:25:31.662 01:18:47 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:25:31.662 01:18:47 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:25:31.662 01:18:47 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:25:31.662 01:18:47 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:31.662 01:18:47 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:25:31.662 01:18:47 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:25:31.662 01:18:47 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:0b:00.0 00:25:31.662 01:18:47 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:0b:00.0 00:25:31.662 01:18:47 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:0b:00.0 00:25:31.662 01:18:47 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:0b:00.0 ']' 00:25:31.662 01:18:47 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:0b:00.0' -i 0 00:25:31.662 01:18:47 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:25:31.662 01:18:47 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:25:31.662 EAL: No free 2048 kB hugepages reported on node 1 00:25:35.847 
01:18:51 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F4Q1P0FGN 00:25:35.847 01:18:51 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:0b:00.0' -i 0 00:25:35.847 01:18:51 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:25:35.847 01:18:51 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:25:35.847 EAL: No free 2048 kB hugepages reported on node 1 00:25:40.034 01:18:55 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:25:40.034 01:18:55 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:25:40.034 01:18:55 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:40.034 01:18:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:40.034 01:18:55 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:25:40.034 01:18:55 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:40.034 01:18:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:40.034 01:18:55 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=67632 00:25:40.034 01:18:55 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:40.034 01:18:55 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:40.034 01:18:55 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 67632 00:25:40.034 01:18:55 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 67632 ']' 00:25:40.034 01:18:55 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:40.034 01:18:55 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:40.034 01:18:55 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:40.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:40.034 01:18:55 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:40.034 01:18:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:40.034 [2024-07-16 01:18:55.827190] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:25:40.034 [2024-07-16 01:18:55.827284] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:40.034 EAL: No free 2048 kB hugepages reported on node 1 00:25:40.034 [2024-07-16 01:18:55.889507] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:40.034 [2024-07-16 01:18:55.990176] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:40.034 [2024-07-16 01:18:55.990230] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
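The target for this suite was started paused (--wait-for-rpc) so that identify passthru can be enabled before subsystem initialization; the RPC exchange traced below then switches it on and starts the framework. A condensed sketch of that sequence, assuming scripts/rpc.py on its default socket /var/tmp/spdk.sock (the one waitforlisten polls above):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
  $SPDK/scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr   # admin_cmd_passthru.identify_ctrlr=true
  $SPDK/scripts/rpc.py framework_start_init
  $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192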
00:25:40.034 [2024-07-16 01:18:55.990253] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:40.034 [2024-07-16 01:18:55.990263] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:40.034 [2024-07-16 01:18:55.990273] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:40.034 [2024-07-16 01:18:55.990350] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:40.034 [2024-07-16 01:18:55.990415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:40.034 [2024-07-16 01:18:55.990481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:40.034 [2024-07-16 01:18:55.990484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:40.034 01:18:56 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:40.034 01:18:56 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:25:40.034 01:18:56 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:25:40.034 01:18:56 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.034 01:18:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:40.292 INFO: Log level set to 20 00:25:40.292 INFO: Requests: 00:25:40.292 { 00:25:40.292 "jsonrpc": "2.0", 00:25:40.292 "method": "nvmf_set_config", 00:25:40.292 "id": 1, 00:25:40.292 "params": { 00:25:40.292 "admin_cmd_passthru": { 00:25:40.292 "identify_ctrlr": true 00:25:40.292 } 00:25:40.292 } 00:25:40.292 } 00:25:40.292 00:25:40.292 INFO: response: 00:25:40.292 { 00:25:40.292 "jsonrpc": "2.0", 00:25:40.292 "id": 1, 00:25:40.292 "result": true 00:25:40.292 } 00:25:40.292 00:25:40.292 01:18:56 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.292 01:18:56 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:25:40.292 01:18:56 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.292 01:18:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:40.292 INFO: Setting log level to 20 00:25:40.292 INFO: Setting log level to 20 00:25:40.292 INFO: Log level set to 20 00:25:40.292 INFO: Log level set to 20 00:25:40.292 INFO: Requests: 00:25:40.292 { 00:25:40.292 "jsonrpc": "2.0", 00:25:40.292 "method": "framework_start_init", 00:25:40.292 "id": 1 00:25:40.292 } 00:25:40.292 00:25:40.292 INFO: Requests: 00:25:40.292 { 00:25:40.292 "jsonrpc": "2.0", 00:25:40.292 "method": "framework_start_init", 00:25:40.292 "id": 1 00:25:40.292 } 00:25:40.292 00:25:40.292 [2024-07-16 01:18:56.137116] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:25:40.292 INFO: response: 00:25:40.292 { 00:25:40.292 "jsonrpc": "2.0", 00:25:40.292 "id": 1, 00:25:40.292 "result": true 00:25:40.292 } 00:25:40.292 00:25:40.292 INFO: response: 00:25:40.292 { 00:25:40.292 "jsonrpc": "2.0", 00:25:40.292 "id": 1, 00:25:40.292 "result": true 00:25:40.292 } 00:25:40.292 00:25:40.292 01:18:56 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.292 01:18:56 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:40.292 01:18:56 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.292 01:18:56 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:25:40.292 INFO: Setting log level to 40 00:25:40.292 INFO: Setting log level to 40 00:25:40.292 INFO: Setting log level to 40 00:25:40.292 [2024-07-16 01:18:56.147123] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:40.292 01:18:56 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.292 01:18:56 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:25:40.292 01:18:56 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:40.292 01:18:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:40.292 01:18:56 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:0b:00.0 00:25:40.292 01:18:56 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.292 01:18:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:43.632 Nvme0n1 00:25:43.632 01:18:59 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.632 01:18:59 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:25:43.632 01:18:59 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.632 01:18:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:43.632 01:18:59 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.632 01:18:59 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:43.632 01:18:59 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.632 01:18:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:43.632 01:18:59 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.632 01:18:59 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:43.632 01:18:59 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.632 01:18:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:43.632 [2024-07-16 01:18:59.045063] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:43.632 01:18:59 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.632 01:18:59 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:25:43.632 01:18:59 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.632 01:18:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:43.632 [ 00:25:43.632 { 00:25:43.632 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:43.632 "subtype": "Discovery", 00:25:43.632 "listen_addresses": [], 00:25:43.632 "allow_any_host": true, 00:25:43.632 "hosts": [] 00:25:43.632 }, 00:25:43.632 { 00:25:43.632 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:43.632 "subtype": "NVMe", 00:25:43.632 "listen_addresses": [ 00:25:43.632 { 00:25:43.632 "trtype": "TCP", 00:25:43.632 "adrfam": "IPv4", 00:25:43.632 "traddr": "10.0.0.2", 00:25:43.632 "trsvcid": "4420" 00:25:43.632 } 00:25:43.632 ], 00:25:43.632 "allow_any_host": true, 00:25:43.632 "hosts": [], 00:25:43.632 "serial_number": 
"SPDK00000000000001", 00:25:43.632 "model_number": "SPDK bdev Controller", 00:25:43.632 "max_namespaces": 1, 00:25:43.632 "min_cntlid": 1, 00:25:43.632 "max_cntlid": 65519, 00:25:43.632 "namespaces": [ 00:25:43.632 { 00:25:43.632 "nsid": 1, 00:25:43.632 "bdev_name": "Nvme0n1", 00:25:43.632 "name": "Nvme0n1", 00:25:43.632 "nguid": "E0545FDC53AD461CA7960C772E70FD7C", 00:25:43.632 "uuid": "e0545fdc-53ad-461c-a796-0c772e70fd7c" 00:25:43.632 } 00:25:43.632 ] 00:25:43.632 } 00:25:43.632 ] 00:25:43.632 01:18:59 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.632 01:18:59 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:43.632 01:18:59 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:25:43.632 01:18:59 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:25:43.632 EAL: No free 2048 kB hugepages reported on node 1 00:25:43.632 01:18:59 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F4Q1P0FGN 00:25:43.632 01:18:59 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:43.632 01:18:59 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:25:43.632 01:18:59 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:25:43.632 EAL: No free 2048 kB hugepages reported on node 1 00:25:43.632 01:18:59 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:25:43.632 01:18:59 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F4Q1P0FGN '!=' BTLJ72430F4Q1P0FGN ']' 00:25:43.632 01:18:59 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:25:43.632 01:18:59 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:43.632 01:18:59 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.632 01:18:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:43.632 01:18:59 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.632 01:18:59 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:25:43.632 01:18:59 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:25:43.632 01:18:59 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:43.632 01:18:59 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:25:43.632 01:18:59 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:43.632 01:18:59 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:25:43.632 01:18:59 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:43.632 01:18:59 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:43.632 rmmod nvme_tcp 00:25:43.632 rmmod nvme_fabrics 00:25:43.632 rmmod nvme_keyring 00:25:43.632 01:18:59 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:43.632 01:18:59 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:25:43.632 01:18:59 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:25:43.632 01:18:59 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 67632 ']' 00:25:43.632 01:18:59 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 67632 00:25:43.632 01:18:59 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 67632 ']' 00:25:43.632 01:18:59 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 67632 00:25:43.632 01:18:59 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:25:43.632 01:18:59 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:43.632 01:18:59 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67632 00:25:43.632 01:18:59 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:43.632 01:18:59 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:43.632 01:18:59 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67632' 00:25:43.632 killing process with pid 67632 00:25:43.632 01:18:59 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 67632 00:25:43.632 01:18:59 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 67632 00:25:45.581 01:19:01 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:45.581 01:19:01 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:45.581 01:19:01 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:45.581 01:19:01 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:45.581 01:19:01 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:45.581 01:19:01 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:45.581 01:19:01 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:45.581 01:19:01 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:47.482 01:19:03 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:47.482 00:25:47.482 real 0m17.928s 00:25:47.482 user 0m26.674s 00:25:47.482 sys 0m2.326s 00:25:47.482 01:19:03 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:47.482 01:19:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:47.482 ************************************ 00:25:47.482 END TEST nvmf_identify_passthru 00:25:47.482 ************************************ 00:25:47.482 01:19:03 -- common/autotest_common.sh@1142 -- # return 0 00:25:47.482 01:19:03 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:25:47.482 01:19:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:47.482 01:19:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:47.482 01:19:03 -- common/autotest_common.sh@10 -- # set +x 00:25:47.482 ************************************ 00:25:47.482 START TEST nvmf_dif 00:25:47.482 ************************************ 00:25:47.482 01:19:03 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:25:47.482 * Looking for test storage... 
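Each suite here runs under run_test, which produces the START/END banners and the real/user/sys timing printed above. Schematically (a simplified sketch, not the exact common/autotest_common.sh implementation):

  run_test() {
      local test_name=$1; shift
      echo "START TEST $test_name"
      time "$@"                      # the suite itself, e.g. dif.sh
      echo "END TEST $test_name"
  }
  run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh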
00:25:47.482 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:47.482 01:19:03 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:47.482 01:19:03 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:25:47.482 01:19:03 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:47.482 01:19:03 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:47.482 01:19:03 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:47.482 01:19:03 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:47.482 01:19:03 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:47.482 01:19:03 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:47.482 01:19:03 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:47.482 01:19:03 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:47.482 01:19:03 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:47.482 01:19:03 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:47.482 01:19:03 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:25:47.482 01:19:03 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:25:47.482 01:19:03 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:47.482 01:19:03 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:47.482 01:19:03 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:47.482 01:19:03 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:47.482 01:19:03 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:47.482 01:19:03 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:47.482 01:19:03 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:47.482 01:19:03 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:47.482 01:19:03 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.482 01:19:03 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.482 01:19:03 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.482 01:19:03 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:25:47.482 01:19:03 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.482 01:19:03 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:25:47.482 01:19:03 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:47.482 01:19:03 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:47.482 01:19:03 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:47.482 01:19:03 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:47.482 01:19:03 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:47.482 01:19:03 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:47.482 01:19:03 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:47.482 01:19:03 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:47.482 01:19:03 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:25:47.482 01:19:03 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:25:47.482 01:19:03 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:25:47.482 01:19:03 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:25:47.482 01:19:03 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:25:47.482 01:19:03 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:47.482 01:19:03 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:47.482 01:19:03 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:47.482 01:19:03 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:47.482 01:19:03 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:47.482 01:19:03 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:47.482 01:19:03 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:47.482 01:19:03 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:47.482 01:19:03 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:47.482 01:19:03 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:47.482 01:19:03 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:25:47.482 01:19:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:49.384 01:19:05 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:49.384 01:19:05 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:25:49.384 01:19:05 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:49.384 01:19:05 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:49.384 01:19:05 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:49.384 01:19:05 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:49.384 01:19:05 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:49.384 01:19:05 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:25:49.384 01:19:05 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:49.384 01:19:05 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:25:49.384 01:19:05 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:25:49.384 01:19:05 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:25:49.384 01:19:05 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:25:49.384 01:19:05 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:25:49.384 01:19:05 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:25:49.384 01:19:05 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:49.384 01:19:05 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:49.384 01:19:05 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:49.384 01:19:05 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:49.384 01:19:05 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:49.384 01:19:05 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:49.384 01:19:05 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:49.384 01:19:05 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:49.384 01:19:05 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:49.384 01:19:05 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:49.384 01:19:05 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:49.384 01:19:05 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:49.384 01:19:05 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:49.384 01:19:05 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:49.384 01:19:05 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:49.384 01:19:05 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:49.384 01:19:05 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:49.384 01:19:05 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:49.384 01:19:05 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:25:49.384 Found 0000:09:00.0 (0x8086 - 0x159b) 00:25:49.384 01:19:05 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:49.384 01:19:05 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:49.384 01:19:05 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:49.384 01:19:05 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:49.384 01:19:05 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:49.384 01:19:05 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:49.384 01:19:05 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:25:49.384 Found 0000:09:00.1 (0x8086 - 0x159b) 00:25:49.384 01:19:05 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:49.384 01:19:05 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:49.384 01:19:05 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:49.384 01:19:05 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:49.384 01:19:05 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:49.384 01:19:05 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:49.384 01:19:05 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:49.384 01:19:05 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:49.384 01:19:05 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:49.384 01:19:05 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:49.384 01:19:05 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:49.384 01:19:05 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:25:49.384 01:19:05 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:49.384 01:19:05 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:49.384 01:19:05 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:49.384 01:19:05 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:49.384 Found net devices under 0000:09:00.0: cvl_0_0 00:25:49.385 01:19:05 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:49.385 01:19:05 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:49.385 01:19:05 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:49.385 01:19:05 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:49.385 01:19:05 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:49.385 01:19:05 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:49.385 01:19:05 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:49.385 01:19:05 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:49.385 01:19:05 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:49.385 Found net devices under 0000:09:00.1: cvl_0_1 00:25:49.385 01:19:05 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:49.385 01:19:05 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:49.385 01:19:05 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:25:49.385 01:19:05 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:49.385 01:19:05 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:49.385 01:19:05 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:49.385 01:19:05 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:49.385 01:19:05 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:49.385 01:19:05 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:49.385 01:19:05 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:49.385 01:19:05 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:49.385 01:19:05 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:49.385 01:19:05 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:49.385 01:19:05 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:49.385 01:19:05 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:49.385 01:19:05 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:49.385 01:19:05 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:49.385 01:19:05 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:49.385 01:19:05 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:49.643 01:19:05 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:49.643 01:19:05 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:49.643 01:19:05 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:49.643 01:19:05 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:49.643 01:19:05 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:49.643 01:19:05 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:49.643 01:19:05 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:49.643 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:49.643 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:25:49.643 00:25:49.643 --- 10.0.0.2 ping statistics --- 00:25:49.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.643 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:25:49.643 01:19:05 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:49.643 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:49.643 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:25:49.643 00:25:49.643 --- 10.0.0.1 ping statistics --- 00:25:49.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.643 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:25:49.643 01:19:05 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:49.643 01:19:05 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:25:49.643 01:19:05 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:25:49.643 01:19:05 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:51.017 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:25:51.017 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:25:51.017 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:25:51.017 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:25:51.017 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:25:51.017 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:25:51.017 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:25:51.017 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:25:51.017 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:25:51.017 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:25:51.017 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:25:51.017 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:25:51.017 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:25:51.017 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:25:51.017 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:25:51.017 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:25:51.017 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:25:51.017 01:19:06 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:51.017 01:19:06 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:51.017 01:19:06 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:51.017 01:19:06 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:51.017 01:19:06 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:51.017 01:19:06 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:51.017 01:19:06 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:25:51.017 01:19:06 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:25:51.017 01:19:06 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:51.017 01:19:06 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:51.017 01:19:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:51.017 01:19:06 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=70915 00:25:51.017 01:19:06 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:51.017 01:19:06 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 70915 00:25:51.017 01:19:06 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 70915 ']' 00:25:51.017 01:19:06 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:51.017 01:19:06 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:51.017 01:19:06 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:51.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:51.017 01:19:06 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:51.017 01:19:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:51.017 [2024-07-16 01:19:06.878538] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:25:51.017 [2024-07-16 01:19:06.878607] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:51.017 EAL: No free 2048 kB hugepages reported on node 1 00:25:51.017 [2024-07-16 01:19:06.942034] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:51.277 [2024-07-16 01:19:07.049273] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:51.277 [2024-07-16 01:19:07.049321] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:51.277 [2024-07-16 01:19:07.049335] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:51.277 [2024-07-16 01:19:07.049347] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:51.277 [2024-07-16 01:19:07.049357] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
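Condensed from the setup traced above: nvmf_tcp_init moves one port of the e810 pair (cvl_0_0) into a private network namespace as the target side, leaves its peer (cvl_0_1) in the root namespace as the initiator, opens TCP port 4420, and nvmfappstart then launches the target inside the namespace. The same sequence as plain commands, all copied from the trace (paths relative to the SPDK repo root):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF  # 0xFFFF = all tracepoint groups

The cross-namespace pings traced earlier verify both directions before any NVMe traffic flows.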
00:25:51.277 [2024-07-16 01:19:07.049381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:51.277 01:19:07 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:51.277 01:19:07 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:25:51.277 01:19:07 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:51.277 01:19:07 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:51.277 01:19:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:51.277 01:19:07 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:51.277 01:19:07 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:25:51.277 01:19:07 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:25:51.277 01:19:07 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.277 01:19:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:51.277 [2024-07-16 01:19:07.183412] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:51.277 01:19:07 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.277 01:19:07 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:25:51.277 01:19:07 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:51.277 01:19:07 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:51.277 01:19:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:51.277 ************************************ 00:25:51.277 START TEST fio_dif_1_default 00:25:51.277 ************************************ 00:25:51.277 01:19:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:25:51.277 01:19:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:25:51.277 01:19:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:25:51.277 01:19:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:25:51.277 01:19:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:25:51.277 01:19:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:25:51.277 01:19:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:51.277 01:19:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.277 01:19:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:51.277 bdev_null0 00:25:51.277 01:19:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.277 01:19:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:51.277 01:19:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.277 01:19:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:51.277 01:19:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.277 01:19:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:51.277 01:19:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.277 01:19:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:51.277 01:19:07 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.277 01:19:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:51.277 01:19:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.277 01:19:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:51.277 [2024-07-16 01:19:07.239677] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:51.277 01:19:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.277 01:19:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:25:51.277 01:19:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:25:51.277 01:19:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:51.277 01:19:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:25:51.277 01:19:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:25:51.277 01:19:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:51.277 01:19:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:51.277 01:19:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:51.277 { 00:25:51.277 "params": { 00:25:51.277 "name": "Nvme$subsystem", 00:25:51.277 "trtype": "$TEST_TRANSPORT", 00:25:51.277 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:51.277 "adrfam": "ipv4", 00:25:51.277 "trsvcid": "$NVMF_PORT", 00:25:51.277 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:51.277 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:51.277 "hdgst": ${hdgst:-false}, 00:25:51.277 "ddgst": ${ddgst:-false} 00:25:51.277 }, 00:25:51.277 "method": "bdev_nvme_attach_controller" 00:25:51.277 } 00:25:51.277 EOF 00:25:51.277 )") 00:25:51.277 01:19:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:51.277 01:19:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:51.277 01:19:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:25:51.277 01:19:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:51.277 01:19:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:51.277 01:19:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:25:51.277 01:19:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:51.277 01:19:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:25:51.277 01:19:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:25:51.277 01:19:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:51.277 01:19:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:51.277 01:19:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:25:51.277 01:19:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:51.277 01:19:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:25:51.277 01:19:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:51.277 01:19:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:25:51.277 01:19:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:25:51.277 01:19:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:25:51.277 01:19:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:25:51.277 01:19:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:51.277 "params": { 00:25:51.277 "name": "Nvme0", 00:25:51.277 "trtype": "tcp", 00:25:51.277 "traddr": "10.0.0.2", 00:25:51.277 "adrfam": "ipv4", 00:25:51.277 "trsvcid": "4420", 00:25:51.277 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:51.277 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:51.277 "hdgst": false, 00:25:51.277 "ddgst": false 00:25:51.277 }, 00:25:51.277 "method": "bdev_nvme_attach_controller" 00:25:51.277 }' 00:25:51.277 01:19:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:51.277 01:19:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:51.277 01:19:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:51.277 01:19:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:51.277 01:19:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:51.277 01:19:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:51.536 01:19:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:51.536 01:19:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:51.536 01:19:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:25:51.536 01:19:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:51.536 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:51.536 fio-3.35 00:25:51.536 Starting 1 thread 00:25:51.536 EAL: No free 2048 kB hugepages reported on node 1 00:26:03.737 00:26:03.737 filename0: (groupid=0, jobs=1): err= 0: pid=71141: Tue Jul 16 01:19:18 2024 00:26:03.737 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10011msec) 00:26:03.737 slat (nsec): min=4382, max=33298, avg=9504.06, stdev=2617.03 00:26:03.737 clat (usec): min=40881, max=47342, avg=40996.33, stdev=407.50 00:26:03.737 lat (usec): min=40889, max=47365, avg=41005.83, stdev=407.52 00:26:03.737 clat percentiles (usec): 00:26:03.737 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:26:03.737 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:26:03.737 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:26:03.737 | 99.00th=[41157], 99.50th=[41157], 99.90th=[47449], 99.95th=[47449], 00:26:03.737 | 99.99th=[47449] 00:26:03.737 bw ( KiB/s): min= 384, max= 416, per=99.49%, avg=388.80, stdev=11.72, samples=20 00:26:03.737 iops : min= 96, max= 104, 
avg=97.20, stdev= 2.93, samples=20 00:26:03.737 lat (msec) : 50=100.00% 00:26:03.737 cpu : usr=89.18%, sys=10.56%, ctx=13, majf=0, minf=241 00:26:03.737 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:03.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:03.737 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:03.737 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:03.737 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:03.737 00:26:03.737 Run status group 0 (all jobs): 00:26:03.737 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10011-10011msec 00:26:03.737 01:19:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:26:03.737 01:19:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:26:03.737 01:19:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:26:03.737 01:19:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:03.737 01:19:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:26:03.737 01:19:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:03.737 01:19:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.737 01:19:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:03.737 01:19:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.737 01:19:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:03.737 01:19:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.737 01:19:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:03.737 01:19:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.737 00:26:03.737 real 0m11.172s 00:26:03.737 user 0m10.066s 00:26:03.737 sys 0m1.314s 00:26:03.737 01:19:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:03.737 01:19:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:03.737 ************************************ 00:26:03.737 END TEST fio_dif_1_default 00:26:03.737 ************************************ 00:26:03.737 01:19:18 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:26:03.737 01:19:18 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:26:03.737 01:19:18 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:03.737 01:19:18 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:03.737 01:19:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:03.737 ************************************ 00:26:03.737 START TEST fio_dif_1_multi_subsystems 00:26:03.737 ************************************ 00:26:03.737 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:26:03.737 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:26:03.737 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:26:03.737 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:26:03.737 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 
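For reference, the fio_dif_1_default run that just finished reduces to a handful of calls. The control plane is five RPCs (rpc_cmd in the trace is autotest_common's wrapper around scripts/rpc.py on /var/tmp/spdk.sock), and the data plane is stock fio with SPDK's external bdev engine preloaded, reading the generated bdev JSON over fd 62 and the generated job file over fd 61. Arguments copied from the trace:

# Control plane: DIF-capable TCP transport, a 64 MB null bdev with 16 bytes of
# per-block metadata, one subsystem with one namespace and one listener.
scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# Data plane: fio_bdev is fio plus the LD_PRELOADed spdk_bdev ioengine.
LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio --ioengine=spdk_bdev \
    --spdk_json_conf /dev/fd/62 /dev/fd/61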
00:26:03.737 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:26:03.737 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:26:03.737 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:03.737 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.737 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:03.738 bdev_null0 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:03.738 [2024-07-16 01:19:18.467091] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:03.738 bdev_null1 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:03.738 01:19:18 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:03.738 { 00:26:03.738 "params": { 00:26:03.738 "name": "Nvme$subsystem", 00:26:03.738 "trtype": "$TEST_TRANSPORT", 00:26:03.738 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:03.738 "adrfam": "ipv4", 00:26:03.738 "trsvcid": "$NVMF_PORT", 00:26:03.738 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:03.738 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:03.738 "hdgst": ${hdgst:-false}, 00:26:03.738 "ddgst": ${ddgst:-false} 00:26:03.738 }, 00:26:03.738 "method": "bdev_nvme_attach_controller" 00:26:03.738 } 00:26:03.738 EOF 00:26:03.738 )") 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@56 -- # cat 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:03.738 { 00:26:03.738 "params": { 00:26:03.738 "name": "Nvme$subsystem", 00:26:03.738 "trtype": "$TEST_TRANSPORT", 00:26:03.738 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:03.738 "adrfam": "ipv4", 00:26:03.738 "trsvcid": "$NVMF_PORT", 00:26:03.738 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:03.738 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:03.738 "hdgst": ${hdgst:-false}, 00:26:03.738 "ddgst": ${ddgst:-false} 00:26:03.738 }, 00:26:03.738 "method": "bdev_nvme_attach_controller" 00:26:03.738 } 00:26:03.738 EOF 00:26:03.738 )") 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
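The heredoc gymnastics traced above are gen_nvmf_target_json at work: each subsystem contributes one bdev_nvme_attach_controller fragment to config[], the fragments are comma-joined through "${config[*]}" with IFS=',', and jq validates and pretty-prints the result, which is what gets echoed next. A condensed sketch of the joining pattern, assuming two subsystems as in this test; the outer wrapper here is simplified to the minimum that keeps jq's input valid JSON, while the real helper embeds the fragments in a fuller bdev-subsystem config:

config=()
for subsystem in 0 1; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done
# "${config[*]}" joins the array elements with the first character of IFS.
jq . <<EOF
{"subsystems": [{"subsystem": "bdev", "config": [$(IFS=,; printf '%s' "${config[*]}")]}]}
EOF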
00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:03.738 "params": { 00:26:03.738 "name": "Nvme0", 00:26:03.738 "trtype": "tcp", 00:26:03.738 "traddr": "10.0.0.2", 00:26:03.738 "adrfam": "ipv4", 00:26:03.738 "trsvcid": "4420", 00:26:03.738 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:03.738 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:03.738 "hdgst": false, 00:26:03.738 "ddgst": false 00:26:03.738 }, 00:26:03.738 "method": "bdev_nvme_attach_controller" 00:26:03.738 },{ 00:26:03.738 "params": { 00:26:03.738 "name": "Nvme1", 00:26:03.738 "trtype": "tcp", 00:26:03.738 "traddr": "10.0.0.2", 00:26:03.738 "adrfam": "ipv4", 00:26:03.738 "trsvcid": "4420", 00:26:03.738 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:03.738 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:03.738 "hdgst": false, 00:26:03.738 "ddgst": false 00:26:03.738 }, 00:26:03.738 "method": "bdev_nvme_attach_controller" 00:26:03.738 }' 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:03.738 01:19:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:03.738 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:03.738 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:03.738 fio-3.35 00:26:03.738 Starting 2 threads 00:26:03.738 EAL: No free 2048 kB hugepages reported on node 1 00:26:13.708 00:26:13.708 filename0: (groupid=0, jobs=1): err= 0: pid=72545: Tue Jul 16 01:19:29 2024 00:26:13.708 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10019msec) 00:26:13.708 slat (nsec): min=7039, max=42585, avg=9228.21, stdev=3284.74 00:26:13.708 clat (usec): min=40798, max=44313, avg=41030.07, stdev=315.12 00:26:13.708 lat (usec): min=40805, max=44353, avg=41039.30, stdev=315.59 00:26:13.708 clat percentiles (usec): 00:26:13.708 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:26:13.708 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:26:13.708 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:26:13.708 | 99.00th=[42206], 99.50th=[43254], 99.90th=[44303], 99.95th=[44303], 00:26:13.708 | 99.99th=[44303] 
00:26:13.708 bw ( KiB/s): min= 384, max= 416, per=40.23%, avg=388.80, stdev=11.72, samples=20 00:26:13.708 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:26:13.708 lat (msec) : 50=100.00% 00:26:13.708 cpu : usr=94.09%, sys=5.61%, ctx=30, majf=0, minf=76 00:26:13.708 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:13.708 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.708 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.708 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.708 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:13.708 filename1: (groupid=0, jobs=1): err= 0: pid=72546: Tue Jul 16 01:19:29 2024 00:26:13.708 read: IOPS=143, BW=576KiB/s (590kB/s)(5760KiB/10005msec) 00:26:13.708 slat (nsec): min=7014, max=90489, avg=9618.16, stdev=4462.50 00:26:13.708 clat (usec): min=584, max=42057, avg=27760.21, stdev=18968.58 00:26:13.708 lat (usec): min=592, max=42072, avg=27769.83, stdev=18968.46 00:26:13.708 clat percentiles (usec): 00:26:13.708 | 1.00th=[ 644], 5.00th=[ 676], 10.00th=[ 701], 20.00th=[ 775], 00:26:13.708 | 30.00th=[ 816], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:26:13.708 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:26:13.708 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:26:13.708 | 99.99th=[42206] 00:26:13.708 bw ( KiB/s): min= 384, max= 768, per=59.51%, avg=574.40, stdev=188.17, samples=20 00:26:13.708 iops : min= 96, max= 192, avg=143.60, stdev=47.04, samples=20 00:26:13.708 lat (usec) : 750=14.31%, 1000=18.19% 00:26:13.708 lat (msec) : 2=0.28%, 10=0.28%, 50=66.94% 00:26:13.708 cpu : usr=94.29%, sys=5.41%, ctx=13, majf=0, minf=216 00:26:13.708 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:13.708 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.708 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.708 issued rwts: total=1440,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.708 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:13.708 00:26:13.708 Run status group 0 (all jobs): 00:26:13.708 READ: bw=965KiB/s (988kB/s), 390KiB/s-576KiB/s (399kB/s-590kB/s), io=9664KiB (9896kB), run=10005-10019msec 00:26:14.310 01:19:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:26:14.310 01:19:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:26:14.310 01:19:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:26:14.310 01:19:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:14.310 01:19:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:26:14.310 01:19:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:14.310 01:19:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.310 01:19:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:14.310 01:19:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.310 01:19:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:14.310 01:19:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:26:14.310 01:19:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:14.310 01:19:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.310 01:19:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:26:14.310 01:19:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:14.310 01:19:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:26:14.310 01:19:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:14.310 01:19:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.310 01:19:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:14.310 01:19:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.310 01:19:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:14.310 01:19:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.310 01:19:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:14.310 01:19:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.310 00:26:14.310 real 0m11.580s 00:26:14.310 user 0m20.415s 00:26:14.310 sys 0m1.375s 00:26:14.310 01:19:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:14.310 01:19:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:14.310 ************************************ 00:26:14.310 END TEST fio_dif_1_multi_subsystems 00:26:14.310 ************************************ 00:26:14.310 01:19:30 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:26:14.310 01:19:30 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:26:14.310 01:19:30 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:14.310 01:19:30 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:14.310 01:19:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:14.310 ************************************ 00:26:14.310 START TEST fio_dif_rand_params 00:26:14.310 ************************************ 00:26:14.310 01:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:26:14.310 01:19:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:26:14.310 01:19:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:26:14.310 01:19:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:26:14.310 01:19:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:26:14.310 01:19:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:26:14.310 01:19:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:26:14.310 01:19:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:26:14.310 01:19:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:26:14.310 01:19:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:14.310 01:19:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:14.310 01:19:30 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@31 -- # create_subsystem 0 00:26:14.310 01:19:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:26:14.310 01:19:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:14.310 01:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.310 01:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:14.310 bdev_null0 00:26:14.310 01:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.310 01:19:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:14.310 01:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.310 01:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:14.310 01:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.310 01:19:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:14.310 01:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.310 01:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:14.310 01:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.310 01:19:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:14.310 01:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.310 01:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:14.310 [2024-07-16 01:19:30.095597] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:14.310 01:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.310 01:19:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:26:14.310 01:19:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:14.310 01:19:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:26:14.310 01:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:14.310 01:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:14.310 01:19:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:14.310 01:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:14.310 01:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:14.310 01:19:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:14.310 01:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:14.310 01:19:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:14.310 01:19:30 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@532 -- # local subsystem config 00:26:14.310 01:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:26:14.310 01:19:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:14.310 01:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:14.310 01:19:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:14.310 01:19:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:14.310 01:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:14.310 01:19:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:14.310 { 00:26:14.310 "params": { 00:26:14.310 "name": "Nvme$subsystem", 00:26:14.310 "trtype": "$TEST_TRANSPORT", 00:26:14.310 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:14.310 "adrfam": "ipv4", 00:26:14.310 "trsvcid": "$NVMF_PORT", 00:26:14.310 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:14.310 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:14.310 "hdgst": ${hdgst:-false}, 00:26:14.310 "ddgst": ${ddgst:-false} 00:26:14.310 }, 00:26:14.310 "method": "bdev_nvme_attach_controller" 00:26:14.310 } 00:26:14.310 EOF 00:26:14.310 )") 00:26:14.310 01:19:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:14.310 01:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:14.310 01:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:26:14.310 01:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:14.310 01:19:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:14.310 01:19:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:14.310 01:19:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
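The config being jq-formatted here feeds the same fio harness as before; what changes in this first fio_dif_rand_params pass are the DIF flavor and the job shape set at target/dif.sh@103: a --dif-type 3 null bdev driven at bs=128k, numjobs=3, iodepth=3 for a 5-second run. Because the transport was created with --dif-insert-or-strip, the target inserts protection information on writes and strips it on reads, so the fio job itself issues ordinary I/O with no metadata handling. The only control-plane difference from the type-1 runs above is the bdev creation (argument list from the trace):

scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3   # DIF type 3 instead of type 1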
00:26:14.310 01:19:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:14.310 01:19:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:14.310 "params": { 00:26:14.310 "name": "Nvme0", 00:26:14.310 "trtype": "tcp", 00:26:14.310 "traddr": "10.0.0.2", 00:26:14.310 "adrfam": "ipv4", 00:26:14.310 "trsvcid": "4420", 00:26:14.310 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:14.310 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:14.311 "hdgst": false, 00:26:14.311 "ddgst": false 00:26:14.311 }, 00:26:14.311 "method": "bdev_nvme_attach_controller" 00:26:14.311 }' 00:26:14.311 01:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:14.311 01:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:14.311 01:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:14.311 01:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:14.311 01:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:14.311 01:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:14.311 01:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:14.311 01:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:14.311 01:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:14.311 01:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:14.568 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:14.568 ... 
00:26:14.568 fio-3.35 00:26:14.568 Starting 3 threads 00:26:14.568 EAL: No free 2048 kB hugepages reported on node 1 00:26:21.124 00:26:21.124 filename0: (groupid=0, jobs=1): err= 0: pid=73947: Tue Jul 16 01:19:36 2024 00:26:21.124 read: IOPS=218, BW=27.3MiB/s (28.6MB/s)(137MiB/5006msec) 00:26:21.124 slat (nsec): min=4699, max=41559, avg=16299.58, stdev=4701.13 00:26:21.124 clat (usec): min=4371, max=91637, avg=13716.72, stdev=10574.19 00:26:21.124 lat (usec): min=4385, max=91655, avg=13733.02, stdev=10574.32 00:26:21.124 clat percentiles (usec): 00:26:21.124 | 1.00th=[ 5080], 5.00th=[ 6259], 10.00th=[ 7570], 20.00th=[ 8455], 00:26:21.124 | 30.00th=[ 9372], 40.00th=[10683], 50.00th=[11469], 60.00th=[12125], 00:26:21.124 | 70.00th=[12911], 80.00th=[13960], 90.00th=[15926], 95.00th=[47973], 00:26:21.124 | 99.00th=[53740], 99.50th=[56361], 99.90th=[59507], 99.95th=[91751], 00:26:21.124 | 99.99th=[91751] 00:26:21.124 bw ( KiB/s): min=17920, max=38476, per=33.50%, avg=27911.60, stdev=6645.28, samples=10 00:26:21.124 iops : min= 140, max= 300, avg=218.00, stdev=51.81, samples=10 00:26:21.124 lat (msec) : 10=34.13%, 20=58.83%, 50=4.12%, 100=2.93% 00:26:21.124 cpu : usr=87.89%, sys=9.39%, ctx=594, majf=0, minf=92 00:26:21.124 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:21.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:21.124 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:21.124 issued rwts: total=1093,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:21.124 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:21.124 filename0: (groupid=0, jobs=1): err= 0: pid=73948: Tue Jul 16 01:19:36 2024 00:26:21.124 read: IOPS=200, BW=25.1MiB/s (26.3MB/s)(126MiB/5007msec) 00:26:21.124 slat (nsec): min=4610, max=48349, avg=13605.56, stdev=2253.07 00:26:21.124 clat (usec): min=4756, max=92216, avg=14939.75, stdev=12416.90 00:26:21.124 lat (usec): min=4769, max=92230, avg=14953.36, stdev=12416.78 00:26:21.124 clat percentiles (usec): 00:26:21.124 | 1.00th=[ 5342], 5.00th=[ 6783], 10.00th=[ 8291], 20.00th=[ 9110], 00:26:21.124 | 30.00th=[10683], 40.00th=[11338], 50.00th=[11863], 60.00th=[12387], 00:26:21.124 | 70.00th=[13042], 80.00th=[13829], 90.00th=[15795], 95.00th=[49546], 00:26:21.124 | 99.00th=[55837], 99.50th=[86508], 99.90th=[90702], 99.95th=[91751], 00:26:21.124 | 99.99th=[91751] 00:26:21.124 bw ( KiB/s): min=18432, max=33024, per=30.76%, avg=25625.60, stdev=4646.48, samples=10 00:26:21.124 iops : min= 144, max= 258, avg=200.20, stdev=36.30, samples=10 00:26:21.124 lat (msec) : 10=25.30%, 20=66.04%, 50=4.18%, 100=4.48% 00:26:21.124 cpu : usr=93.75%, sys=5.83%, ctx=11, majf=0, minf=145 00:26:21.124 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:21.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:21.124 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:21.124 issued rwts: total=1004,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:21.124 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:21.124 filename0: (groupid=0, jobs=1): err= 0: pid=73949: Tue Jul 16 01:19:36 2024 00:26:21.124 read: IOPS=235, BW=29.4MiB/s (30.9MB/s)(149MiB/5047msec) 00:26:21.124 slat (nsec): min=4908, max=76108, avg=13344.49, stdev=2275.06 00:26:21.124 clat (usec): min=4162, max=89427, avg=12693.33, stdev=9518.13 00:26:21.124 lat (usec): min=4174, max=89441, avg=12706.68, stdev=9518.27 00:26:21.124 clat percentiles (usec): 
00:26:21.124 | 1.00th=[ 4883], 5.00th=[ 5342], 10.00th=[ 6325], 20.00th=[ 8225], 00:26:21.124 | 30.00th=[ 8848], 40.00th=[10290], 50.00th=[11207], 60.00th=[11863], 00:26:21.124 | 70.00th=[12518], 80.00th=[13566], 90.00th=[15139], 95.00th=[46400], 00:26:21.124 | 99.00th=[51643], 99.50th=[52691], 99.90th=[88605], 99.95th=[89654], 00:26:21.124 | 99.99th=[89654] 00:26:21.124 bw ( KiB/s): min=19968, max=40704, per=36.41%, avg=30336.00, stdev=5486.94, samples=10 00:26:21.124 iops : min= 156, max= 318, avg=237.00, stdev=42.87, samples=10 00:26:21.124 lat (msec) : 10=36.78%, 20=57.91%, 50=3.03%, 100=2.27% 00:26:21.124 cpu : usr=93.18%, sys=6.40%, ctx=15, majf=0, minf=105 00:26:21.124 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:21.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:21.124 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:21.124 issued rwts: total=1188,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:21.124 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:21.124 00:26:21.124 Run status group 0 (all jobs): 00:26:21.124 READ: bw=81.4MiB/s (85.3MB/s), 25.1MiB/s-29.4MiB/s (26.3MB/s-30.9MB/s), io=411MiB (431MB), run=5006-5047msec 00:26:21.124 01:19:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:26:21.124 01:19:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:21.124 01:19:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:21.124 01:19:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:21.124 01:19:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:21.124 01:19:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:21.124 01:19:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.124 01:19:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:21.124 01:19:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.124 01:19:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:21.124 01:19:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.124 01:19:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:21.124 01:19:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.124 01:19:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:26:21.124 01:19:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:26:21.124 01:19:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:26:21.124 01:19:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:26:21.124 01:19:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:26:21.124 01:19:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:26:21.124 01:19:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:26:21.124 01:19:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:21.124 01:19:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:21.124 01:19:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:26:21.124 01:19:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
00:26:21.124 01:19:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:26:21.124 01:19:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.124 01:19:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:21.124 bdev_null0 00:26:21.124 01:19:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.124 01:19:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:21.124 01:19:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.124 01:19:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:21.124 01:19:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.124 01:19:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:21.124 01:19:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.124 01:19:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:21.124 01:19:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.124 01:19:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:21.124 01:19:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.124 01:19:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:21.124 [2024-07-16 01:19:36.398144] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:21.124 01:19:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.124 01:19:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:21.124 01:19:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:26:21.124 01:19:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:26:21.124 01:19:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:26:21.124 01:19:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:21.125 bdev_null1 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:21.125 bdev_null2 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:21.125 { 00:26:21.125 "params": { 00:26:21.125 "name": "Nvme$subsystem", 00:26:21.125 "trtype": "$TEST_TRANSPORT", 00:26:21.125 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:26:21.125 "adrfam": "ipv4", 00:26:21.125 "trsvcid": "$NVMF_PORT", 00:26:21.125 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:21.125 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:21.125 "hdgst": ${hdgst:-false}, 00:26:21.125 "ddgst": ${ddgst:-false} 00:26:21.125 }, 00:26:21.125 "method": "bdev_nvme_attach_controller" 00:26:21.125 } 00:26:21.125 EOF 00:26:21.125 )") 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:21.125 { 00:26:21.125 "params": { 00:26:21.125 "name": "Nvme$subsystem", 00:26:21.125 "trtype": "$TEST_TRANSPORT", 00:26:21.125 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:21.125 "adrfam": "ipv4", 00:26:21.125 "trsvcid": "$NVMF_PORT", 00:26:21.125 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:21.125 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:21.125 "hdgst": ${hdgst:-false}, 00:26:21.125 "ddgst": ${ddgst:-false} 00:26:21.125 }, 00:26:21.125 "method": "bdev_nvme_attach_controller" 00:26:21.125 } 00:26:21.125 EOF 00:26:21.125 )") 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- 
# (( file++ )) 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:21.125 { 00:26:21.125 "params": { 00:26:21.125 "name": "Nvme$subsystem", 00:26:21.125 "trtype": "$TEST_TRANSPORT", 00:26:21.125 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:21.125 "adrfam": "ipv4", 00:26:21.125 "trsvcid": "$NVMF_PORT", 00:26:21.125 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:21.125 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:21.125 "hdgst": ${hdgst:-false}, 00:26:21.125 "ddgst": ${ddgst:-false} 00:26:21.125 }, 00:26:21.125 "method": "bdev_nvme_attach_controller" 00:26:21.125 } 00:26:21.125 EOF 00:26:21.125 )") 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:21.125 "params": { 00:26:21.125 "name": "Nvme0", 00:26:21.125 "trtype": "tcp", 00:26:21.125 "traddr": "10.0.0.2", 00:26:21.125 "adrfam": "ipv4", 00:26:21.125 "trsvcid": "4420", 00:26:21.125 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:21.125 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:21.125 "hdgst": false, 00:26:21.125 "ddgst": false 00:26:21.125 }, 00:26:21.125 "method": "bdev_nvme_attach_controller" 00:26:21.125 },{ 00:26:21.125 "params": { 00:26:21.125 "name": "Nvme1", 00:26:21.125 "trtype": "tcp", 00:26:21.125 "traddr": "10.0.0.2", 00:26:21.125 "adrfam": "ipv4", 00:26:21.125 "trsvcid": "4420", 00:26:21.125 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:21.125 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:21.125 "hdgst": false, 00:26:21.125 "ddgst": false 00:26:21.125 }, 00:26:21.125 "method": "bdev_nvme_attach_controller" 00:26:21.125 },{ 00:26:21.125 "params": { 00:26:21.125 "name": "Nvme2", 00:26:21.125 "trtype": "tcp", 00:26:21.125 "traddr": "10.0.0.2", 00:26:21.125 "adrfam": "ipv4", 00:26:21.125 "trsvcid": "4420", 00:26:21.125 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:21.125 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:21.125 "hdgst": false, 00:26:21.125 "ddgst": false 00:26:21.125 }, 00:26:21.125 "method": "bdev_nvme_attach_controller" 00:26:21.125 }' 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:21.125 01:19:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:21.125 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:21.125 ... 00:26:21.126 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:21.126 ... 00:26:21.126 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:21.126 ... 00:26:21.126 fio-3.35 00:26:21.126 Starting 24 threads 00:26:21.126 EAL: No free 2048 kB hugepages reported on node 1 00:26:33.350 00:26:33.350 filename0: (groupid=0, jobs=1): err= 0: pid=74816: Tue Jul 16 01:19:47 2024 00:26:33.350 read: IOPS=86, BW=347KiB/s (355kB/s)(3520KiB/10147msec) 00:26:33.350 slat (nsec): min=6474, max=94397, avg=18139.78, stdev=19672.60 00:26:33.350 clat (msec): min=3, max=263, avg=184.01, stdev=63.60 00:26:33.350 lat (msec): min=3, max=263, avg=184.03, stdev=63.60 00:26:33.350 clat percentiles (msec): 00:26:33.350 | 1.00th=[ 4], 5.00th=[ 6], 10.00th=[ 102], 20.00th=[ 159], 00:26:33.350 | 30.00th=[ 176], 40.00th=[ 186], 50.00th=[ 199], 60.00th=[ 213], 00:26:33.350 | 70.00th=[ 222], 80.00th=[ 226], 90.00th=[ 251], 95.00th=[ 255], 00:26:33.350 | 99.00th=[ 264], 99.50th=[ 264], 99.90th=[ 264], 99.95th=[ 264], 00:26:33.350 | 99.99th=[ 264] 00:26:33.350 bw ( KiB/s): min= 256, max= 896, per=5.71%, avg=345.60, stdev=144.92, samples=20 00:26:33.350 iops : min= 64, max= 224, avg=86.40, stdev=36.23, samples=20 00:26:33.350 lat (msec) : 4=4.66%, 10=2.61%, 100=1.82%, 250=80.45%, 500=10.45% 00:26:33.350 cpu : usr=98.01%, sys=1.60%, ctx=25, majf=0, minf=62 00:26:33.350 IO depths : 1=0.7%, 2=4.2%, 4=16.7%, 8=66.6%, 16=11.8%, 32=0.0%, >=64=0.0% 00:26:33.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.350 complete : 0=0.0%, 4=91.9%, 8=2.5%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.350 issued rwts: total=880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.350 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:33.350 filename0: (groupid=0, jobs=1): err= 0: pid=74817: Tue Jul 16 01:19:47 2024 00:26:33.350 read: IOPS=76, BW=306KiB/s (313kB/s)(3096KiB/10127msec) 00:26:33.350 slat (usec): min=7, max=144, avg=19.24, stdev=17.48 00:26:33.350 clat (msec): min=122, max=303, avg=208.59, stdev=35.97 00:26:33.350 lat (msec): min=122, max=303, avg=208.61, stdev=35.97 00:26:33.350 clat percentiles (msec): 00:26:33.350 | 1.00th=[ 123], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 176], 00:26:33.350 | 30.00th=[ 186], 40.00th=[ 194], 50.00th=[ 207], 60.00th=[ 224], 00:26:33.350 | 70.00th=[ 232], 80.00th=[ 243], 90.00th=[ 257], 95.00th=[ 259], 00:26:33.350 | 99.00th=[ 305], 99.50th=[ 305], 99.90th=[ 305], 99.95th=[ 305], 00:26:33.350 | 99.99th=[ 305] 00:26:33.350 bw ( KiB/s): min= 256, max= 384, per=5.01%, avg=303.20, stdev=55.30, samples=20 00:26:33.350 iops : min= 64, max= 96, avg=75.80, stdev=13.82, samples=20 00:26:33.350 lat (msec) : 250=84.50%, 500=15.50% 00:26:33.350 cpu : usr=97.83%, sys=1.50%, ctx=33, majf=0, minf=71 00:26:33.350 IO 
depths : 1=1.2%, 2=7.0%, 4=23.6%, 8=56.8%, 16=11.4%, 32=0.0%, >=64=0.0% 00:26:33.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.350 complete : 0=0.0%, 4=93.9%, 8=0.6%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.350 issued rwts: total=774,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.350 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:33.350 filename0: (groupid=0, jobs=1): err= 0: pid=74818: Tue Jul 16 01:19:47 2024 00:26:33.350 read: IOPS=71, BW=288KiB/s (294kB/s)(2912KiB/10127msec) 00:26:33.350 slat (nsec): min=8087, max=98921, avg=27366.20, stdev=27485.94 00:26:33.350 clat (msec): min=148, max=374, avg=222.07, stdev=37.11 00:26:33.350 lat (msec): min=149, max=374, avg=222.10, stdev=37.12 00:26:33.350 clat percentiles (msec): 00:26:33.350 | 1.00th=[ 157], 5.00th=[ 167], 10.00th=[ 178], 20.00th=[ 192], 00:26:33.350 | 30.00th=[ 201], 40.00th=[ 218], 50.00th=[ 222], 60.00th=[ 226], 00:26:33.350 | 70.00th=[ 232], 80.00th=[ 253], 90.00th=[ 268], 95.00th=[ 288], 00:26:33.350 | 99.00th=[ 363], 99.50th=[ 376], 99.90th=[ 376], 99.95th=[ 376], 00:26:33.350 | 99.99th=[ 376] 00:26:33.350 bw ( KiB/s): min= 224, max= 384, per=4.70%, avg=284.75, stdev=40.78, samples=20 00:26:33.350 iops : min= 56, max= 96, avg=71.15, stdev=10.23, samples=20 00:26:33.350 lat (msec) : 250=78.02%, 500=21.98% 00:26:33.350 cpu : usr=97.92%, sys=1.36%, ctx=165, majf=0, minf=40 00:26:33.350 IO depths : 1=0.7%, 2=2.3%, 4=11.0%, 8=74.0%, 16=12.0%, 32=0.0%, >=64=0.0% 00:26:33.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.350 complete : 0=0.0%, 4=90.1%, 8=4.5%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.350 issued rwts: total=728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.350 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:33.350 filename0: (groupid=0, jobs=1): err= 0: pid=74819: Tue Jul 16 01:19:47 2024 00:26:33.350 read: IOPS=52, BW=209KiB/s (214kB/s)(2112KiB/10104msec) 00:26:33.350 slat (nsec): min=8779, max=98715, avg=44128.29, stdev=22108.90 00:26:33.350 clat (msec): min=179, max=449, avg=305.76, stdev=47.71 00:26:33.350 lat (msec): min=179, max=449, avg=305.80, stdev=47.70 00:26:33.350 clat percentiles (msec): 00:26:33.350 | 1.00th=[ 192], 5.00th=[ 228], 10.00th=[ 228], 20.00th=[ 259], 00:26:33.350 | 30.00th=[ 275], 40.00th=[ 288], 50.00th=[ 305], 60.00th=[ 338], 00:26:33.350 | 70.00th=[ 338], 80.00th=[ 347], 90.00th=[ 363], 95.00th=[ 363], 00:26:33.350 | 99.00th=[ 388], 99.50th=[ 439], 99.90th=[ 451], 99.95th=[ 451], 00:26:33.350 | 99.99th=[ 451] 00:26:33.350 bw ( KiB/s): min= 128, max= 256, per=3.37%, avg=204.80, stdev=64.34, samples=20 00:26:33.350 iops : min= 32, max= 64, avg=51.20, stdev=16.08, samples=20 00:26:33.350 lat (msec) : 250=10.61%, 500=89.39% 00:26:33.350 cpu : usr=98.21%, sys=1.32%, ctx=19, majf=0, minf=40 00:26:33.350 IO depths : 1=5.3%, 2=11.6%, 4=25.0%, 8=50.9%, 16=7.2%, 32=0.0%, >=64=0.0% 00:26:33.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.351 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.351 issued rwts: total=528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.351 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:33.351 filename0: (groupid=0, jobs=1): err= 0: pid=74820: Tue Jul 16 01:19:47 2024 00:26:33.351 read: IOPS=52, BW=209KiB/s (214kB/s)(2112KiB/10109msec) 00:26:33.351 slat (usec): min=8, max=115, avg=44.44, stdev=28.80 00:26:33.351 clat (msec): min=136, max=439, avg=305.94, stdev=49.48 
00:26:33.351 lat (msec): min=136, max=439, avg=305.99, stdev=49.46 00:26:33.351 clat percentiles (msec): 00:26:33.351 | 1.00th=[ 226], 5.00th=[ 228], 10.00th=[ 247], 20.00th=[ 259], 00:26:33.351 | 30.00th=[ 275], 40.00th=[ 288], 50.00th=[ 305], 60.00th=[ 338], 00:26:33.351 | 70.00th=[ 338], 80.00th=[ 347], 90.00th=[ 363], 95.00th=[ 376], 00:26:33.351 | 99.00th=[ 435], 99.50th=[ 439], 99.90th=[ 439], 99.95th=[ 439], 00:26:33.351 | 99.99th=[ 439] 00:26:33.351 bw ( KiB/s): min= 128, max= 256, per=3.37%, avg=204.80, stdev=62.85, samples=20 00:26:33.351 iops : min= 32, max= 64, avg=51.20, stdev=15.71, samples=20 00:26:33.351 lat (msec) : 250=10.23%, 500=89.77% 00:26:33.351 cpu : usr=97.85%, sys=1.56%, ctx=26, majf=0, minf=46 00:26:33.351 IO depths : 1=4.5%, 2=10.8%, 4=25.0%, 8=51.7%, 16=8.0%, 32=0.0%, >=64=0.0% 00:26:33.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.351 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.351 issued rwts: total=528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.351 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:33.351 filename0: (groupid=0, jobs=1): err= 0: pid=74821: Tue Jul 16 01:19:47 2024 00:26:33.351 read: IOPS=52, BW=208KiB/s (213kB/s)(2104KiB/10111msec) 00:26:33.351 slat (usec): min=8, max=101, avg=32.55, stdev=26.88 00:26:33.351 clat (msec): min=168, max=501, avg=306.89, stdev=60.45 00:26:33.351 lat (msec): min=168, max=501, avg=306.92, stdev=60.44 00:26:33.351 clat percentiles (msec): 00:26:33.351 | 1.00th=[ 194], 5.00th=[ 226], 10.00th=[ 243], 20.00th=[ 259], 00:26:33.351 | 30.00th=[ 275], 40.00th=[ 284], 50.00th=[ 296], 60.00th=[ 334], 00:26:33.351 | 70.00th=[ 342], 80.00th=[ 347], 90.00th=[ 363], 95.00th=[ 368], 00:26:33.351 | 99.00th=[ 502], 99.50th=[ 502], 99.90th=[ 502], 99.95th=[ 502], 00:26:33.351 | 99.99th=[ 502] 00:26:33.351 bw ( KiB/s): min= 128, max= 384, per=3.54%, avg=214.74, stdev=74.15, samples=19 00:26:33.351 iops : min= 32, max= 96, avg=53.68, stdev=18.54, samples=19 00:26:33.351 lat (msec) : 250=12.55%, 500=84.03%, 750=3.42% 00:26:33.351 cpu : usr=97.85%, sys=1.66%, ctx=33, majf=0, minf=41 00:26:33.351 IO depths : 1=5.3%, 2=11.6%, 4=25.1%, 8=51.0%, 16=7.0%, 32=0.0%, >=64=0.0% 00:26:33.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.351 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.351 issued rwts: total=526,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.351 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:33.351 filename0: (groupid=0, jobs=1): err= 0: pid=74822: Tue Jul 16 01:19:47 2024 00:26:33.351 read: IOPS=63, BW=254KiB/s (260kB/s)(2568KiB/10127msec) 00:26:33.351 slat (usec): min=8, max=124, avg=39.47, stdev=30.47 00:26:33.351 clat (msec): min=149, max=400, avg=251.98, stdev=43.88 00:26:33.351 lat (msec): min=149, max=400, avg=252.02, stdev=43.90 00:26:33.351 clat percentiles (msec): 00:26:33.351 | 1.00th=[ 159], 5.00th=[ 188], 10.00th=[ 211], 20.00th=[ 218], 00:26:33.351 | 30.00th=[ 226], 40.00th=[ 232], 50.00th=[ 253], 60.00th=[ 257], 00:26:33.351 | 70.00th=[ 268], 80.00th=[ 279], 90.00th=[ 305], 95.00th=[ 342], 00:26:33.351 | 99.00th=[ 376], 99.50th=[ 397], 99.90th=[ 401], 99.95th=[ 401], 00:26:33.351 | 99.99th=[ 401] 00:26:33.351 bw ( KiB/s): min= 144, max= 336, per=4.13%, avg=250.40, stdev=44.13, samples=20 00:26:33.351 iops : min= 36, max= 84, avg=62.60, stdev=11.03, samples=20 00:26:33.351 lat (msec) : 250=46.42%, 500=53.58% 00:26:33.351 cpu : usr=97.42%, 
sys=1.78%, ctx=85, majf=0, minf=45 00:26:33.351 IO depths : 1=2.2%, 2=6.2%, 4=18.2%, 8=62.9%, 16=10.4%, 32=0.0%, >=64=0.0% 00:26:33.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.351 complete : 0=0.0%, 4=92.1%, 8=2.4%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.351 issued rwts: total=642,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.351 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:33.351 filename0: (groupid=0, jobs=1): err= 0: pid=74823: Tue Jul 16 01:19:47 2024 00:26:33.351 read: IOPS=64, BW=259KiB/s (265kB/s)(2624KiB/10127msec) 00:26:33.351 slat (usec): min=8, max=104, avg=39.00, stdev=30.73 00:26:33.351 clat (msec): min=122, max=393, avg=246.52, stdev=45.27 00:26:33.351 lat (msec): min=122, max=393, avg=246.55, stdev=45.29 00:26:33.351 clat percentiles (msec): 00:26:33.351 | 1.00th=[ 124], 5.00th=[ 169], 10.00th=[ 190], 20.00th=[ 199], 00:26:33.351 | 30.00th=[ 228], 40.00th=[ 245], 50.00th=[ 251], 60.00th=[ 257], 00:26:33.351 | 70.00th=[ 271], 80.00th=[ 284], 90.00th=[ 296], 95.00th=[ 321], 00:26:33.351 | 99.00th=[ 355], 99.50th=[ 393], 99.90th=[ 393], 99.95th=[ 393], 00:26:33.351 | 99.99th=[ 393] 00:26:33.351 bw ( KiB/s): min= 128, max= 384, per=4.22%, avg=256.00, stdev=66.28, samples=20 00:26:33.351 iops : min= 32, max= 96, avg=64.00, stdev=16.57, samples=20 00:26:33.351 lat (msec) : 250=50.30%, 500=49.70% 00:26:33.351 cpu : usr=98.02%, sys=1.51%, ctx=14, majf=0, minf=33 00:26:33.351 IO depths : 1=1.8%, 2=8.1%, 4=25.0%, 8=54.4%, 16=10.7%, 32=0.0%, >=64=0.0% 00:26:33.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.351 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.351 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.351 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:33.351 filename1: (groupid=0, jobs=1): err= 0: pid=74824: Tue Jul 16 01:19:47 2024 00:26:33.351 read: IOPS=52, BW=209KiB/s (214kB/s)(2112KiB/10107msec) 00:26:33.351 slat (usec): min=5, max=110, avg=54.28, stdev=24.20 00:26:33.351 clat (msec): min=225, max=375, avg=305.80, stdev=43.92 00:26:33.351 lat (msec): min=225, max=375, avg=305.85, stdev=43.91 00:26:33.351 clat percentiles (msec): 00:26:33.351 | 1.00th=[ 226], 5.00th=[ 228], 10.00th=[ 253], 20.00th=[ 262], 00:26:33.351 | 30.00th=[ 275], 40.00th=[ 292], 50.00th=[ 305], 60.00th=[ 338], 00:26:33.351 | 70.00th=[ 338], 80.00th=[ 347], 90.00th=[ 355], 95.00th=[ 363], 00:26:33.352 | 99.00th=[ 376], 99.50th=[ 376], 99.90th=[ 376], 99.95th=[ 376], 00:26:33.352 | 99.99th=[ 376] 00:26:33.352 bw ( KiB/s): min= 128, max= 256, per=3.37%, avg=204.80, stdev=64.34, samples=20 00:26:33.352 iops : min= 32, max= 64, avg=51.20, stdev=16.08, samples=20 00:26:33.352 lat (msec) : 250=9.09%, 500=90.91% 00:26:33.352 cpu : usr=98.02%, sys=1.45%, ctx=50, majf=0, minf=48 00:26:33.352 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:26:33.352 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.352 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.352 issued rwts: total=528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.352 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:33.352 filename1: (groupid=0, jobs=1): err= 0: pid=74825: Tue Jul 16 01:19:47 2024 00:26:33.352 read: IOPS=87, BW=349KiB/s (357kB/s)(3536KiB/10145msec) 00:26:33.352 slat (usec): min=7, max=127, avg=14.50, stdev=12.93 00:26:33.352 clat (msec): min=3, max=343, 
avg=183.18, stdev=67.07 00:26:33.352 lat (msec): min=3, max=343, avg=183.19, stdev=67.06 00:26:33.352 clat percentiles (msec): 00:26:33.352 | 1.00th=[ 4], 5.00th=[ 4], 10.00th=[ 71], 20.00th=[ 159], 00:26:33.352 | 30.00th=[ 176], 40.00th=[ 186], 50.00th=[ 199], 60.00th=[ 209], 00:26:33.352 | 70.00th=[ 220], 80.00th=[ 228], 90.00th=[ 253], 95.00th=[ 259], 00:26:33.352 | 99.00th=[ 279], 99.50th=[ 338], 99.90th=[ 342], 99.95th=[ 342], 00:26:33.352 | 99.99th=[ 342] 00:26:33.352 bw ( KiB/s): min= 256, max= 992, per=5.74%, avg=347.20, stdev=160.34, samples=20 00:26:33.352 iops : min= 64, max= 248, avg=86.80, stdev=40.09, samples=20 00:26:33.352 lat (msec) : 4=5.20%, 10=2.04%, 50=0.79%, 100=3.51%, 250=76.92% 00:26:33.352 lat (msec) : 500=11.54% 00:26:33.352 cpu : usr=98.20%, sys=1.41%, ctx=15, majf=0, minf=53 00:26:33.352 IO depths : 1=0.3%, 2=0.7%, 4=6.3%, 8=80.2%, 16=12.4%, 32=0.0%, >=64=0.0% 00:26:33.352 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.352 complete : 0=0.0%, 4=88.9%, 8=5.8%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.352 issued rwts: total=884,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.352 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:33.352 filename1: (groupid=0, jobs=1): err= 0: pid=74826: Tue Jul 16 01:19:47 2024 00:26:33.352 read: IOPS=74, BW=298KiB/s (305kB/s)(3016KiB/10127msec) 00:26:33.352 slat (usec): min=8, max=109, avg=17.10, stdev=17.33 00:26:33.352 clat (msec): min=150, max=340, avg=214.42, stdev=30.87 00:26:33.352 lat (msec): min=150, max=340, avg=214.44, stdev=30.87 00:26:33.352 clat percentiles (msec): 00:26:33.352 | 1.00th=[ 159], 5.00th=[ 169], 10.00th=[ 171], 20.00th=[ 180], 00:26:33.352 | 30.00th=[ 201], 40.00th=[ 211], 50.00th=[ 220], 60.00th=[ 224], 00:26:33.352 | 70.00th=[ 228], 80.00th=[ 234], 90.00th=[ 253], 95.00th=[ 259], 00:26:33.352 | 99.00th=[ 317], 99.50th=[ 342], 99.90th=[ 342], 99.95th=[ 342], 00:26:33.352 | 99.99th=[ 342] 00:26:33.352 bw ( KiB/s): min= 224, max= 384, per=4.88%, avg=295.20, stdev=46.57, samples=20 00:26:33.352 iops : min= 56, max= 96, avg=73.80, stdev=11.64, samples=20 00:26:33.352 lat (msec) : 250=87.53%, 500=12.47% 00:26:33.352 cpu : usr=98.35%, sys=1.24%, ctx=23, majf=0, minf=69 00:26:33.352 IO depths : 1=1.5%, 2=3.4%, 4=12.1%, 8=71.9%, 16=11.1%, 32=0.0%, >=64=0.0% 00:26:33.352 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.352 complete : 0=0.0%, 4=90.3%, 8=4.2%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.352 issued rwts: total=754,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.352 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:33.352 filename1: (groupid=0, jobs=1): err= 0: pid=74827: Tue Jul 16 01:19:47 2024 00:26:33.352 read: IOPS=66, BW=266KiB/s (273kB/s)(2696KiB/10127msec) 00:26:33.352 slat (usec): min=8, max=119, avg=21.56, stdev=16.88 00:26:33.352 clat (msec): min=122, max=360, avg=239.98, stdev=44.24 00:26:33.352 lat (msec): min=122, max=360, avg=240.00, stdev=44.24 00:26:33.352 clat percentiles (msec): 00:26:33.352 | 1.00th=[ 124], 5.00th=[ 169], 10.00th=[ 180], 20.00th=[ 213], 00:26:33.352 | 30.00th=[ 224], 40.00th=[ 228], 50.00th=[ 243], 60.00th=[ 253], 00:26:33.352 | 70.00th=[ 259], 80.00th=[ 279], 90.00th=[ 288], 95.00th=[ 305], 00:26:33.352 | 99.00th=[ 347], 99.50th=[ 359], 99.90th=[ 359], 99.95th=[ 359], 00:26:33.352 | 99.99th=[ 359] 00:26:33.352 bw ( KiB/s): min= 144, max= 368, per=4.35%, avg=263.20, stdev=56.98, samples=20 00:26:33.352 iops : min= 36, max= 92, avg=65.80, stdev=14.24, samples=20 
00:26:33.352 lat (msec) : 250=56.97%, 500=43.03% 00:26:33.352 cpu : usr=97.07%, sys=1.96%, ctx=122, majf=0, minf=43 00:26:33.352 IO depths : 1=2.2%, 2=6.2%, 4=18.1%, 8=63.1%, 16=10.4%, 32=0.0%, >=64=0.0% 00:26:33.352 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.352 complete : 0=0.0%, 4=92.1%, 8=2.4%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.352 issued rwts: total=674,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.352 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:33.352 filename1: (groupid=0, jobs=1): err= 0: pid=74828: Tue Jul 16 01:19:47 2024 00:26:33.352 read: IOPS=52, BW=209KiB/s (214kB/s)(2112KiB/10107msec) 00:26:33.352 slat (usec): min=9, max=115, avg=27.89, stdev=14.23 00:26:33.352 clat (msec): min=159, max=451, avg=306.03, stdev=52.98 00:26:33.352 lat (msec): min=159, max=451, avg=306.06, stdev=52.99 00:26:33.352 clat percentiles (msec): 00:26:33.352 | 1.00th=[ 182], 5.00th=[ 228], 10.00th=[ 228], 20.00th=[ 259], 00:26:33.352 | 30.00th=[ 271], 40.00th=[ 288], 50.00th=[ 305], 60.00th=[ 338], 00:26:33.352 | 70.00th=[ 338], 80.00th=[ 351], 90.00th=[ 363], 95.00th=[ 376], 00:26:33.352 | 99.00th=[ 439], 99.50th=[ 451], 99.90th=[ 451], 99.95th=[ 451], 00:26:33.352 | 99.99th=[ 451] 00:26:33.352 bw ( KiB/s): min= 128, max= 256, per=3.37%, avg=204.80, stdev=61.33, samples=20 00:26:33.352 iops : min= 32, max= 64, avg=51.20, stdev=15.33, samples=20 00:26:33.352 lat (msec) : 250=12.88%, 500=87.12% 00:26:33.352 cpu : usr=98.16%, sys=1.36%, ctx=24, majf=0, minf=36 00:26:33.352 IO depths : 1=3.4%, 2=9.7%, 4=25.0%, 8=52.8%, 16=9.1%, 32=0.0%, >=64=0.0% 00:26:33.352 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.353 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.353 issued rwts: total=528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.353 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:33.353 filename1: (groupid=0, jobs=1): err= 0: pid=74829: Tue Jul 16 01:19:47 2024 00:26:33.353 read: IOPS=72, BW=291KiB/s (298kB/s)(2952KiB/10128msec) 00:26:33.353 slat (usec): min=8, max=143, avg=22.49, stdev=22.86 00:26:33.353 clat (msec): min=122, max=360, avg=219.08, stdev=39.57 00:26:33.353 lat (msec): min=122, max=360, avg=219.10, stdev=39.57 00:26:33.353 clat percentiles (msec): 00:26:33.353 | 1.00th=[ 124], 5.00th=[ 159], 10.00th=[ 167], 20.00th=[ 180], 00:26:33.353 | 30.00th=[ 201], 40.00th=[ 218], 50.00th=[ 222], 60.00th=[ 226], 00:26:33.353 | 70.00th=[ 234], 80.00th=[ 255], 90.00th=[ 271], 95.00th=[ 284], 00:26:33.353 | 99.00th=[ 330], 99.50th=[ 359], 99.90th=[ 359], 99.95th=[ 359], 00:26:33.353 | 99.99th=[ 359] 00:26:33.353 bw ( KiB/s): min= 128, max= 384, per=4.76%, avg=288.80, stdev=64.94, samples=20 00:26:33.353 iops : min= 32, max= 96, avg=72.20, stdev=16.23, samples=20 00:26:33.353 lat (msec) : 250=76.42%, 500=23.58% 00:26:33.353 cpu : usr=97.95%, sys=1.52%, ctx=46, majf=0, minf=44 00:26:33.353 IO depths : 1=3.0%, 2=6.5%, 4=16.7%, 8=64.2%, 16=9.6%, 32=0.0%, >=64=0.0% 00:26:33.353 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.353 complete : 0=0.0%, 4=91.6%, 8=2.8%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.353 issued rwts: total=738,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.353 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:33.353 filename1: (groupid=0, jobs=1): err= 0: pid=74830: Tue Jul 16 01:19:47 2024 00:26:33.353 read: IOPS=52, BW=209KiB/s (214kB/s)(2112KiB/10108msec) 00:26:33.353 slat (nsec): 
min=8258, max=57795, avg=25243.47, stdev=9699.95 00:26:33.353 clat (msec): min=137, max=451, avg=306.07, stdev=53.61 00:26:33.353 lat (msec): min=137, max=451, avg=306.10, stdev=53.62 00:26:33.353 clat percentiles (msec): 00:26:33.353 | 1.00th=[ 180], 5.00th=[ 226], 10.00th=[ 228], 20.00th=[ 257], 00:26:33.353 | 30.00th=[ 275], 40.00th=[ 288], 50.00th=[ 305], 60.00th=[ 338], 00:26:33.353 | 70.00th=[ 342], 80.00th=[ 351], 90.00th=[ 363], 95.00th=[ 376], 00:26:33.353 | 99.00th=[ 439], 99.50th=[ 451], 99.90th=[ 451], 99.95th=[ 451], 00:26:33.353 | 99.99th=[ 451] 00:26:33.353 bw ( KiB/s): min= 128, max= 256, per=3.37%, avg=204.80, stdev=61.33, samples=20 00:26:33.353 iops : min= 32, max= 64, avg=51.20, stdev=15.33, samples=20 00:26:33.353 lat (msec) : 250=12.50%, 500=87.50% 00:26:33.353 cpu : usr=98.02%, sys=1.35%, ctx=29, majf=0, minf=39 00:26:33.353 IO depths : 1=3.2%, 2=9.5%, 4=25.0%, 8=53.0%, 16=9.3%, 32=0.0%, >=64=0.0% 00:26:33.353 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.353 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.353 issued rwts: total=528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.353 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:33.353 filename1: (groupid=0, jobs=1): err= 0: pid=74831: Tue Jul 16 01:19:47 2024 00:26:33.353 read: IOPS=55, BW=221KiB/s (227kB/s)(2240KiB/10113msec) 00:26:33.353 slat (usec): min=8, max=181, avg=39.99, stdev=28.55 00:26:33.353 clat (msec): min=166, max=451, avg=288.61, stdev=54.70 00:26:33.353 lat (msec): min=166, max=451, avg=288.65, stdev=54.69 00:26:33.353 clat percentiles (msec): 00:26:33.353 | 1.00th=[ 180], 5.00th=[ 199], 10.00th=[ 226], 20.00th=[ 243], 00:26:33.353 | 30.00th=[ 257], 40.00th=[ 271], 50.00th=[ 284], 60.00th=[ 292], 00:26:33.353 | 70.00th=[ 334], 80.00th=[ 338], 90.00th=[ 351], 95.00th=[ 363], 00:26:33.353 | 99.00th=[ 443], 99.50th=[ 451], 99.90th=[ 451], 99.95th=[ 451], 00:26:33.353 | 99.99th=[ 451] 00:26:33.353 bw ( KiB/s): min= 128, max= 368, per=3.59%, avg=217.60, stdev=77.94, samples=20 00:26:33.353 iops : min= 32, max= 92, avg=54.40, stdev=19.48, samples=20 00:26:33.353 lat (msec) : 250=24.29%, 500=75.71% 00:26:33.353 cpu : usr=97.46%, sys=1.84%, ctx=54, majf=0, minf=42 00:26:33.353 IO depths : 1=3.2%, 2=9.5%, 4=25.0%, 8=53.0%, 16=9.3%, 32=0.0%, >=64=0.0% 00:26:33.353 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.353 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.353 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.353 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:33.353 filename2: (groupid=0, jobs=1): err= 0: pid=74832: Tue Jul 16 01:19:47 2024 00:26:33.353 read: IOPS=68, BW=273KiB/s (279kB/s)(2760KiB/10127msec) 00:26:33.353 slat (usec): min=7, max=141, avg=27.59, stdev=26.27 00:26:33.353 clat (msec): min=122, max=385, avg=234.33, stdev=47.02 00:26:33.353 lat (msec): min=122, max=385, avg=234.35, stdev=47.03 00:26:33.353 clat percentiles (msec): 00:26:33.353 | 1.00th=[ 123], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 207], 00:26:33.353 | 30.00th=[ 220], 40.00th=[ 224], 50.00th=[ 230], 60.00th=[ 236], 00:26:33.353 | 70.00th=[ 251], 80.00th=[ 268], 90.00th=[ 279], 95.00th=[ 338], 00:26:33.353 | 99.00th=[ 368], 99.50th=[ 368], 99.90th=[ 384], 99.95th=[ 384], 00:26:33.353 | 99.99th=[ 384] 00:26:33.353 bw ( KiB/s): min= 144, max= 368, per=4.45%, avg=269.55, stdev=53.78, samples=20 00:26:33.353 iops : min= 36, max= 92, avg=67.35, 
stdev=13.46, samples=20 00:26:33.353 lat (msec) : 250=65.51%, 500=34.49% 00:26:33.353 cpu : usr=98.07%, sys=1.38%, ctx=15, majf=0, minf=35 00:26:33.353 IO depths : 1=1.7%, 2=5.2%, 4=16.1%, 8=65.8%, 16=11.2%, 32=0.0%, >=64=0.0% 00:26:33.353 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.353 complete : 0=0.0%, 4=91.4%, 8=3.5%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.353 issued rwts: total=690,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.353 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:33.353 filename2: (groupid=0, jobs=1): err= 0: pid=74833: Tue Jul 16 01:19:47 2024 00:26:33.353 read: IOPS=63, BW=255KiB/s (261kB/s)(2584KiB/10127msec) 00:26:33.353 slat (usec): min=8, max=105, avg=42.96, stdev=31.06 00:26:33.353 clat (msec): min=123, max=428, avg=250.38, stdev=46.04 00:26:33.353 lat (msec): min=123, max=428, avg=250.42, stdev=46.06 00:26:33.353 clat percentiles (msec): 00:26:33.353 | 1.00th=[ 124], 5.00th=[ 169], 10.00th=[ 194], 20.00th=[ 218], 00:26:33.353 | 30.00th=[ 226], 40.00th=[ 247], 50.00th=[ 255], 60.00th=[ 259], 00:26:33.354 | 70.00th=[ 271], 80.00th=[ 284], 90.00th=[ 305], 95.00th=[ 330], 00:26:33.354 | 99.00th=[ 334], 99.50th=[ 393], 99.90th=[ 430], 99.95th=[ 430], 00:26:33.354 | 99.99th=[ 430] 00:26:33.354 bw ( KiB/s): min= 128, max= 384, per=4.15%, avg=252.00, stdev=65.02, samples=20 00:26:33.354 iops : min= 32, max= 96, avg=63.00, stdev=16.25, samples=20 00:26:33.354 lat (msec) : 250=43.34%, 500=56.66% 00:26:33.354 cpu : usr=98.07%, sys=1.43%, ctx=15, majf=0, minf=40 00:26:33.354 IO depths : 1=2.8%, 2=7.7%, 4=21.1%, 8=58.7%, 16=9.8%, 32=0.0%, >=64=0.0% 00:26:33.354 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.354 complete : 0=0.0%, 4=93.0%, 8=1.4%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.354 issued rwts: total=646,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.354 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:33.354 filename2: (groupid=0, jobs=1): err= 0: pid=74834: Tue Jul 16 01:19:47 2024 00:26:33.354 read: IOPS=64, BW=258KiB/s (264kB/s)(2608KiB/10127msec) 00:26:33.354 slat (usec): min=8, max=103, avg=45.58, stdev=28.97 00:26:33.354 clat (msec): min=122, max=445, avg=247.81, stdev=52.11 00:26:33.354 lat (msec): min=122, max=445, avg=247.86, stdev=52.12 00:26:33.354 clat percentiles (msec): 00:26:33.354 | 1.00th=[ 123], 5.00th=[ 153], 10.00th=[ 176], 20.00th=[ 218], 00:26:33.354 | 30.00th=[ 226], 40.00th=[ 234], 50.00th=[ 247], 60.00th=[ 257], 00:26:33.354 | 70.00th=[ 268], 80.00th=[ 279], 90.00th=[ 305], 95.00th=[ 342], 00:26:33.354 | 99.00th=[ 397], 99.50th=[ 435], 99.90th=[ 447], 99.95th=[ 447], 00:26:33.354 | 99.99th=[ 447] 00:26:33.354 bw ( KiB/s): min= 128, max= 368, per=4.20%, avg=254.40, stdev=49.76, samples=20 00:26:33.354 iops : min= 32, max= 92, avg=63.60, stdev=12.44, samples=20 00:26:33.354 lat (msec) : 250=51.23%, 500=48.77% 00:26:33.354 cpu : usr=98.07%, sys=1.46%, ctx=19, majf=0, minf=40 00:26:33.354 IO depths : 1=2.0%, 2=6.3%, 4=19.0%, 8=62.1%, 16=10.6%, 32=0.0%, >=64=0.0% 00:26:33.354 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.354 complete : 0=0.0%, 4=92.4%, 8=2.1%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.354 issued rwts: total=652,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.354 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:33.354 filename2: (groupid=0, jobs=1): err= 0: pid=74835: Tue Jul 16 01:19:47 2024 00:26:33.354 read: IOPS=52, BW=209KiB/s (214kB/s)(2112KiB/10111msec) 
00:26:33.354 slat (nsec): min=5836, max=96427, avg=27156.42, stdev=16535.91 00:26:33.354 clat (msec): min=136, max=450, avg=306.15, stdev=47.97 00:26:33.354 lat (msec): min=136, max=450, avg=306.18, stdev=47.97 00:26:33.354 clat percentiles (msec): 00:26:33.354 | 1.00th=[ 186], 5.00th=[ 228], 10.00th=[ 253], 20.00th=[ 262], 00:26:33.354 | 30.00th=[ 275], 40.00th=[ 292], 50.00th=[ 305], 60.00th=[ 338], 00:26:33.354 | 70.00th=[ 338], 80.00th=[ 347], 90.00th=[ 359], 95.00th=[ 363], 00:26:33.354 | 99.00th=[ 422], 99.50th=[ 447], 99.90th=[ 451], 99.95th=[ 451], 00:26:33.354 | 99.99th=[ 451] 00:26:33.354 bw ( KiB/s): min= 128, max= 256, per=3.37%, avg=204.80, stdev=62.85, samples=20 00:26:33.354 iops : min= 32, max= 64, avg=51.20, stdev=15.71, samples=20 00:26:33.354 lat (msec) : 250=9.85%, 500=90.15% 00:26:33.354 cpu : usr=98.14%, sys=1.36%, ctx=48, majf=0, minf=42 00:26:33.354 IO depths : 1=4.7%, 2=11.0%, 4=25.0%, 8=51.5%, 16=7.8%, 32=0.0%, >=64=0.0% 00:26:33.354 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.354 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.354 issued rwts: total=528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.354 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:33.354 filename2: (groupid=0, jobs=1): err= 0: pid=74836: Tue Jul 16 01:19:47 2024 00:26:33.354 read: IOPS=52, BW=209KiB/s (214kB/s)(2112KiB/10117msec) 00:26:33.354 slat (nsec): min=3970, max=82732, avg=25896.36, stdev=8716.29 00:26:33.354 clat (msec): min=181, max=450, avg=306.16, stdev=47.35 00:26:33.354 lat (msec): min=181, max=450, avg=306.18, stdev=47.35 00:26:33.354 clat percentiles (msec): 00:26:33.354 | 1.00th=[ 228], 5.00th=[ 228], 10.00th=[ 247], 20.00th=[ 262], 00:26:33.354 | 30.00th=[ 275], 40.00th=[ 288], 50.00th=[ 305], 60.00th=[ 338], 00:26:33.354 | 70.00th=[ 338], 80.00th=[ 347], 90.00th=[ 363], 95.00th=[ 363], 00:26:33.354 | 99.00th=[ 426], 99.50th=[ 439], 99.90th=[ 451], 99.95th=[ 451], 00:26:33.354 | 99.99th=[ 451] 00:26:33.354 bw ( KiB/s): min= 128, max= 256, per=3.37%, avg=204.80, stdev=61.33, samples=20 00:26:33.354 iops : min= 32, max= 64, avg=51.20, stdev=15.33, samples=20 00:26:33.354 lat (msec) : 250=10.23%, 500=89.77% 00:26:33.354 cpu : usr=97.41%, sys=1.83%, ctx=20, majf=0, minf=40 00:26:33.354 IO depths : 1=4.9%, 2=11.2%, 4=25.0%, 8=51.3%, 16=7.6%, 32=0.0%, >=64=0.0% 00:26:33.354 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.354 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.354 issued rwts: total=528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.354 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:33.354 filename2: (groupid=0, jobs=1): err= 0: pid=74837: Tue Jul 16 01:19:47 2024 00:26:33.354 read: IOPS=79, BW=316KiB/s (324kB/s)(3200KiB/10119msec) 00:26:33.354 slat (nsec): min=7574, max=48736, avg=13772.10, stdev=6838.43 00:26:33.354 clat (msec): min=101, max=313, avg=200.61, stdev=40.75 00:26:33.354 lat (msec): min=101, max=313, avg=200.62, stdev=40.75 00:26:33.354 clat percentiles (msec): 00:26:33.354 | 1.00th=[ 103], 5.00th=[ 136], 10.00th=[ 153], 20.00th=[ 165], 00:26:33.354 | 30.00th=[ 178], 40.00th=[ 186], 50.00th=[ 194], 60.00th=[ 213], 00:26:33.354 | 70.00th=[ 228], 80.00th=[ 243], 90.00th=[ 257], 95.00th=[ 259], 00:26:33.354 | 99.00th=[ 266], 99.50th=[ 266], 99.90th=[ 313], 99.95th=[ 313], 00:26:33.354 | 99.99th=[ 313] 00:26:33.354 bw ( KiB/s): min= 240, max= 384, per=5.18%, avg=313.55, stdev=65.48, samples=20 
00:26:33.354 iops : min= 60, max= 96, avg=78.35, stdev=16.33, samples=20 00:26:33.354 lat (msec) : 250=84.00%, 500=16.00% 00:26:33.354 cpu : usr=98.06%, sys=1.55%, ctx=33, majf=0, minf=45 00:26:33.354 IO depths : 1=5.1%, 2=11.4%, 4=25.0%, 8=51.1%, 16=7.4%, 32=0.0%, >=64=0.0% 00:26:33.354 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.354 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.354 issued rwts: total=800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.354 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:33.354 filename2: (groupid=0, jobs=1): err= 0: pid=74838: Tue Jul 16 01:19:47 2024 00:26:33.355 read: IOPS=52, BW=209KiB/s (214kB/s)(2112KiB/10111msec) 00:26:33.355 slat (usec): min=8, max=109, avg=59.89, stdev=22.22 00:26:33.355 clat (msec): min=225, max=425, avg=305.84, stdev=48.00 00:26:33.355 lat (msec): min=225, max=425, avg=305.90, stdev=47.99 00:26:33.355 clat percentiles (msec): 00:26:33.355 | 1.00th=[ 226], 5.00th=[ 228], 10.00th=[ 247], 20.00th=[ 262], 00:26:33.355 | 30.00th=[ 275], 40.00th=[ 288], 50.00th=[ 300], 60.00th=[ 330], 00:26:33.355 | 70.00th=[ 347], 80.00th=[ 347], 90.00th=[ 363], 95.00th=[ 368], 00:26:33.355 | 99.00th=[ 426], 99.50th=[ 426], 99.90th=[ 426], 99.95th=[ 426], 00:26:33.355 | 99.99th=[ 426] 00:26:33.355 bw ( KiB/s): min= 128, max= 256, per=3.37%, avg=204.80, stdev=64.34, samples=20 00:26:33.355 iops : min= 32, max= 64, avg=51.20, stdev=16.08, samples=20 00:26:33.355 lat (msec) : 250=12.12%, 500=87.88% 00:26:33.355 cpu : usr=97.36%, sys=1.83%, ctx=54, majf=0, minf=41 00:26:33.355 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:26:33.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.355 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.355 issued rwts: total=528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.355 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:33.355 filename2: (groupid=0, jobs=1): err= 0: pid=74839: Tue Jul 16 01:19:47 2024 00:26:33.355 read: IOPS=50, BW=203KiB/s (208kB/s)(2048KiB/10104msec) 00:26:33.355 slat (usec): min=8, max=105, avg=21.54, stdev=17.25 00:26:33.355 clat (msec): min=196, max=501, avg=312.97, stdev=60.88 00:26:33.355 lat (msec): min=196, max=501, avg=312.99, stdev=60.88 00:26:33.355 clat percentiles (msec): 00:26:33.355 | 1.00th=[ 203], 5.00th=[ 226], 10.00th=[ 241], 20.00th=[ 262], 00:26:33.355 | 30.00th=[ 279], 40.00th=[ 288], 50.00th=[ 305], 60.00th=[ 334], 00:26:33.355 | 70.00th=[ 342], 80.00th=[ 355], 90.00th=[ 363], 95.00th=[ 443], 00:26:33.355 | 99.00th=[ 502], 99.50th=[ 502], 99.90th=[ 502], 99.95th=[ 502], 00:26:33.355 | 99.99th=[ 502] 00:26:33.355 bw ( KiB/s): min= 128, max= 384, per=3.44%, avg=208.84, stdev=73.80, samples=19 00:26:33.355 iops : min= 32, max= 96, avg=52.21, stdev=18.45, samples=19 00:26:33.355 lat (msec) : 250=13.67%, 500=83.20%, 750=3.12% 00:26:33.355 cpu : usr=98.23%, sys=1.36%, ctx=23, majf=0, minf=48 00:26:33.355 IO depths : 1=3.5%, 2=9.8%, 4=25.0%, 8=52.7%, 16=9.0%, 32=0.0%, >=64=0.0% 00:26:33.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.355 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.355 issued rwts: total=512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.355 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:33.355 00:26:33.355 Run status group 0 (all jobs): 00:26:33.355 READ: bw=6047KiB/s (6192kB/s), 
203KiB/s-349KiB/s (208kB/s-357kB/s), io=59.9MiB (62.8MB), run=10104-10147msec 00:26:33.355 01:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:26:33.355 01:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:33.355 01:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:33.355 01:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:33.355 01:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:33.355 01:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:33.355 01:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.355 01:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:33.355 01:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.355 01:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:33.355 01:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.355 01:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:33.355 01:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.355 01:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:33.355 01:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:33.355 01:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:26:33.355 01:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:33.355 01:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.355 01:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:33.355 01:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.355 01:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:33.355 01:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.355 01:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:33.355 01:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.355 01:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:33.355 01:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:26:33.355 01:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:26:33.355 01:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:33.355 01:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.355 01:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:33.355 01:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.355 01:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:26:33.355 01:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.355 01:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:33.355 
01:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.355 01:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:26:33.355 01:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:26:33.355 01:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:26:33.355 01:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:26:33.355 01:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:26:33.355 01:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:26:33.355 01:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:26:33.355 01:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:33.355 01:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:33.355 01:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:26:33.355 01:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:26:33.355 01:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:33.355 01:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.356 01:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:33.356 bdev_null0 00:26:33.356 01:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.356 01:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:33.356 01:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.356 01:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:33.356 01:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.356 01:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:33.356 01:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.356 01:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:33.356 01:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.356 01:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:33.356 01:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.356 01:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:33.356 [2024-07-16 01:19:48.204420] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:33.356 01:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.356 01:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:33.356 01:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:26:33.356 01:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:26:33.356 01:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:33.356 01:19:48 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.356 01:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:33.356 bdev_null1 00:26:33.356 01:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.356 01:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:33.356 01:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.356 01:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:33.356 01:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.356 01:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:33.356 01:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.356 01:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:33.356 01:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.356 01:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:33.356 01:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.356 01:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:33.356 01:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.356 01:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:26:33.356 01:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:26:33.356 01:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:33.356 01:19:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:33.356 01:19:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:26:33.356 01:19:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:33.356 01:19:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:33.356 { 00:26:33.356 "params": { 00:26:33.356 "name": "Nvme$subsystem", 00:26:33.356 "trtype": "$TEST_TRANSPORT", 00:26:33.356 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:33.356 "adrfam": "ipv4", 00:26:33.356 "trsvcid": "$NVMF_PORT", 00:26:33.356 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:33.356 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:33.356 "hdgst": ${hdgst:-false}, 00:26:33.356 "ddgst": ${ddgst:-false} 00:26:33.356 }, 00:26:33.356 "method": "bdev_nvme_attach_controller" 00:26:33.356 } 00:26:33.356 EOF 00:26:33.356 )") 00:26:33.356 01:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:33.356 01:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:33.356 01:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:33.356 01:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:33.356 01:19:48 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@54 -- # local file 00:26:33.356 01:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:33.356 01:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:33.356 01:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:33.356 01:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:33.356 01:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:26:33.356 01:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:33.356 01:19:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:33.356 01:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:33.356 01:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:33.356 01:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:33.356 01:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:33.356 01:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:33.356 01:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:26:33.356 01:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:33.356 01:19:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:33.356 01:19:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:33.356 { 00:26:33.356 "params": { 00:26:33.356 "name": "Nvme$subsystem", 00:26:33.357 "trtype": "$TEST_TRANSPORT", 00:26:33.357 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:33.357 "adrfam": "ipv4", 00:26:33.357 "trsvcid": "$NVMF_PORT", 00:26:33.357 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:33.357 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:33.357 "hdgst": ${hdgst:-false}, 00:26:33.357 "ddgst": ${ddgst:-false} 00:26:33.357 }, 00:26:33.357 "method": "bdev_nvme_attach_controller" 00:26:33.357 } 00:26:33.357 EOF 00:26:33.357 )") 00:26:33.357 01:19:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:33.357 01:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:26:33.357 01:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:33.357 01:19:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
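The ldd | grep | awk probing traced above checks whether the spdk_bdev fio plugin was linked against a sanitizer runtime (libasan or libclang_rt.asan); any hit is added to LD_PRELOAD so the sanitizer loads before the plugin itself. Condensed into a standalone sketch (paths copied from this workspace; this mirrors, but is not verbatim, the fio_plugin helper in autotest_common.sh):

    plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
    asan_lib=
    for sanitizer in libasan libclang_rt.asan; do
        # the third ldd column is the resolved library path; empty when not linked in
        lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [ -n "$lib" ] && asan_lib="$asan_lib $lib"
    done
    # preload the sanitizer runtime(s) first (none found in this run), then the ioengine;
    # fd 62 carries the target JSON printed below, fd 61 the generated fio job file
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61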
00:26:33.357 01:19:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:33.357 01:19:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:33.357 "params": { 00:26:33.357 "name": "Nvme0", 00:26:33.357 "trtype": "tcp", 00:26:33.357 "traddr": "10.0.0.2", 00:26:33.357 "adrfam": "ipv4", 00:26:33.357 "trsvcid": "4420", 00:26:33.357 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:33.357 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:33.357 "hdgst": false, 00:26:33.357 "ddgst": false 00:26:33.357 }, 00:26:33.357 "method": "bdev_nvme_attach_controller" 00:26:33.357 },{ 00:26:33.357 "params": { 00:26:33.357 "name": "Nvme1", 00:26:33.357 "trtype": "tcp", 00:26:33.357 "traddr": "10.0.0.2", 00:26:33.357 "adrfam": "ipv4", 00:26:33.357 "trsvcid": "4420", 00:26:33.357 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:33.357 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:33.357 "hdgst": false, 00:26:33.357 "ddgst": false 00:26:33.357 }, 00:26:33.357 "method": "bdev_nvme_attach_controller" 00:26:33.357 }' 00:26:33.357 01:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:33.357 01:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:33.357 01:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:33.357 01:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:33.357 01:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:33.357 01:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:33.357 01:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:33.357 01:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:33.357 01:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:33.357 01:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:33.357 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:33.357 ... 00:26:33.357 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:33.357 ... 
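The JSON printed above reaches fio through /dev/fd/62 while the generated job file arrives on /dev/fd/61. A rough standalone equivalent, assuming the config has been saved to a file named bdev.json; the job parameters mirror the job lines just printed, and filename0 is only a label (the spdk_bdev engine resolves filename= to the bdev that bdev_nvme_attach_controller created, here Nvme0n1):

  # fio against SPDK bdevs instead of kernel block devices; thread=1 is required
  # by the SPDK ioengine plugins.
  LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=bdev.json --thread=1 \
      --name=filename0 --filename=Nvme0n1 --rw=randread --bs=8k --iodepth=8 \
      --runtime=5 --time_based=1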
00:26:33.357 fio-3.35
00:26:33.357 Starting 4 threads
00:26:33.357 EAL: No free 2048 kB hugepages reported on node 1
00:26:38.619
00:26:38.619 filename0: (groupid=0, jobs=1): err= 0: pid=76233: Tue Jul 16 01:19:54 2024
00:26:38.619 read: IOPS=1895, BW=14.8MiB/s (15.5MB/s)(74.1MiB/5003msec)
00:26:38.619 slat (nsec): min=3992, max=39729, avg=11948.63, stdev=4579.38
00:26:38.619 clat (usec): min=1575, max=8757, avg=4178.58, stdev=461.58
00:26:38.619 lat (usec): min=1589, max=8774, avg=4190.53, stdev=461.46
00:26:38.619 clat percentiles (usec):
00:26:38.619 | 1.00th=[ 2606], 5.00th=[ 3458], 10.00th=[ 3687], 20.00th=[ 3851],
00:26:38.619 | 30.00th=[ 4015], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4293],
00:26:38.619 | 70.00th=[ 4359], 80.00th=[ 4359], 90.00th=[ 4621], 95.00th=[ 4883],
00:26:38.619 | 99.00th=[ 5276], 99.50th=[ 5997], 99.90th=[ 6718], 99.95th=[ 7635],
00:26:38.619 | 99.99th=[ 8717]
00:26:38.619 bw ( KiB/s): min=14592, max=15712, per=26.05%, avg=15158.40, stdev=319.93, samples=10
00:26:38.619 iops : min= 1824, max= 1964, avg=1894.80, stdev=39.99, samples=10
00:26:38.619 lat (msec) : 2=0.04%, 4=28.91%, 10=71.05%
00:26:38.619 cpu : usr=90.44%, sys=7.64%, ctx=214, majf=0, minf=0
00:26:38.619 IO depths : 1=0.7%, 2=13.0%, 4=58.6%, 8=27.7%, 16=0.0%, 32=0.0%, >=64=0.0%
00:26:38.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:38.619 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:38.619 issued rwts: total=9482,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:38.619 latency : target=0, window=0, percentile=100.00%, depth=8
00:26:38.619 filename0: (groupid=0, jobs=1): err= 0: pid=76234: Tue Jul 16 01:19:54 2024
00:26:38.619 read: IOPS=1760, BW=13.8MiB/s (14.4MB/s)(68.8MiB/5002msec)
00:26:38.619 slat (nsec): min=3826, max=37347, avg=13382.29, stdev=3929.55
00:26:38.619 clat (usec): min=902, max=9148, avg=4492.13, stdev=664.89
00:26:38.619 lat (usec): min=915, max=9160, avg=4505.51, stdev=664.50
00:26:38.619 clat percentiles (usec):
00:26:38.619 | 1.00th=[ 3326], 5.00th=[ 3785], 10.00th=[ 3949], 20.00th=[ 4228],
00:26:38.619 | 30.00th=[ 4293], 40.00th=[ 4293], 50.00th=[ 4359], 60.00th=[ 4359],
00:26:38.619 | 70.00th=[ 4424], 80.00th=[ 4686], 90.00th=[ 5342], 95.00th=[ 5932],
00:26:38.619 | 99.00th=[ 7177], 99.50th=[ 7439], 99.90th=[ 7832], 99.95th=[ 7832],
00:26:38.619 | 99.99th=[ 9110]
00:26:38.619 bw ( KiB/s): min=13392, max=14576, per=24.21%, avg=14088.00, stdev=460.17, samples=10
00:26:38.619 iops : min= 1674, max= 1822, avg=1761.00, stdev=57.52, samples=10
00:26:38.619 lat (usec) : 1000=0.02%
00:26:38.619 lat (msec) : 2=0.09%, 4=11.67%, 10=88.22%
00:26:38.619 cpu : usr=91.40%, sys=6.56%, ctx=212, majf=0, minf=9
00:26:38.619 IO depths : 1=0.1%, 2=17.5%, 4=55.4%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0%
00:26:38.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:38.619 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:38.619 issued rwts: total=8808,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:38.619 latency : target=0, window=0, percentile=100.00%, depth=8
00:26:38.619 filename1: (groupid=0, jobs=1): err= 0: pid=76235: Tue Jul 16 01:19:54 2024
00:26:38.619 read: IOPS=1833, BW=14.3MiB/s (15.0MB/s)(71.7MiB/5002msec)
00:26:38.619 slat (nsec): min=3839, max=33946, avg=12573.69, stdev=3744.11
00:26:38.619 clat (usec): min=1148, max=7978, avg=4314.37, stdev=541.86
00:26:38.619 lat (usec): min=1162, max=7992, avg=4326.95, stdev=541.69
00:26:38.619 clat percentiles (usec):
00:26:38.619 | 1.00th=[ 2933], 5.00th=[ 3621], 10.00th=[ 3818], 20.00th=[ 4015],
00:26:38.619 | 30.00th=[ 4228], 40.00th=[ 4293], 50.00th=[ 4293], 60.00th=[ 4359],
00:26:38.619 | 70.00th=[ 4359], 80.00th=[ 4424], 90.00th=[ 4752], 95.00th=[ 5342],
00:26:38.619 | 99.00th=[ 6390], 99.50th=[ 6718], 99.90th=[ 7635], 99.95th=[ 7832],
00:26:38.619 | 99.99th=[ 7963]
00:26:38.619 bw ( KiB/s): min=14320, max=15104, per=25.20%, avg=14666.90, stdev=260.65, samples=10
00:26:38.619 iops : min= 1790, max= 1888, avg=1833.30, stdev=32.59, samples=10
00:26:38.619 lat (msec) : 2=0.22%, 4=18.55%, 10=81.23%
00:26:38.619 cpu : usr=94.10%, sys=5.30%, ctx=9, majf=0, minf=0
00:26:38.619 IO depths : 1=0.1%, 2=20.6%, 4=52.8%, 8=26.5%, 16=0.0%, 32=0.0%, >=64=0.0%
00:26:38.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:38.619 complete : 0=0.0%, 4=91.3%, 8=8.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:38.619 issued rwts: total=9173,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:38.619 latency : target=0, window=0, percentile=100.00%, depth=8
00:26:38.619 filename1: (groupid=0, jobs=1): err= 0: pid=76236: Tue Jul 16 01:19:54 2024
00:26:38.619 read: IOPS=1784, BW=13.9MiB/s (14.6MB/s)(69.8MiB/5002msec)
00:26:38.619 slat (nsec): min=3964, max=34926, avg=13972.77, stdev=3814.38
00:26:38.619 clat (usec): min=929, max=8475, avg=4428.31, stdev=629.59
00:26:38.619 lat (usec): min=943, max=8487, avg=4442.28, stdev=629.18
00:26:38.619 clat percentiles (usec):
00:26:38.619 | 1.00th=[ 3064], 5.00th=[ 3752], 10.00th=[ 3916], 20.00th=[ 4178],
00:26:38.619 | 30.00th=[ 4228], 40.00th=[ 4293], 50.00th=[ 4293], 60.00th=[ 4359],
00:26:38.619 | 70.00th=[ 4424], 80.00th=[ 4555], 90.00th=[ 5145], 95.00th=[ 5604],
00:26:38.619 | 99.00th=[ 6980], 99.50th=[ 7242], 99.90th=[ 7767], 99.95th=[ 7832],
00:26:38.619 | 99.99th=[ 8455]
00:26:38.619 bw ( KiB/s): min=13200, max=14880, per=24.53%, avg=14273.60, stdev=480.09, samples=10
00:26:38.619 iops : min= 1650, max= 1860, avg=1784.20, stdev=60.01, samples=10
00:26:38.619 lat (usec) : 1000=0.02%
00:26:38.619 lat (msec) : 2=0.20%, 4=13.28%, 10=86.49%
00:26:38.619 cpu : usr=93.54%, sys=5.62%, ctx=11, majf=0, minf=9
00:26:38.619 IO depths : 1=0.1%, 2=18.3%, 4=54.9%, 8=26.7%, 16=0.0%, 32=0.0%, >=64=0.0%
00:26:38.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:38.619 complete : 0=0.0%, 4=91.5%, 8=8.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:38.620 issued rwts: total=8928,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:38.620 latency : target=0, window=0, percentile=100.00%, depth=8
00:26:38.620
00:26:38.620 Run status group 0 (all jobs):
00:26:38.620 READ: bw=56.8MiB/s (59.6MB/s), 13.8MiB/s-14.8MiB/s (14.4MB/s-15.5MB/s), io=284MiB (298MB), run=5002-5003msec
00:26:38.620 01:19:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1
00:26:38.620 01:19:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub
00:26:38.620 01:19:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:26:38.620 01:19:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0
00:26:38.620 01:19:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0
00:26:38.620 01:19:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:26:38.620 01:19:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:38.620 01:19:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:26:38.620
01:19:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.620 01:19:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:38.620 01:19:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.620 01:19:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:38.620 01:19:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.620 01:19:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:38.620 01:19:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:38.620 01:19:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:26:38.620 01:19:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:38.620 01:19:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.620 01:19:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:38.620 01:19:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.620 01:19:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:38.620 01:19:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.620 01:19:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:38.620 01:19:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.620 00:26:38.620 real 0m24.387s 00:26:38.620 user 4m34.754s 00:26:38.620 sys 0m6.990s 00:26:38.620 01:19:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:38.620 01:19:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:38.620 ************************************ 00:26:38.620 END TEST fio_dif_rand_params 00:26:38.620 ************************************ 00:26:38.620 01:19:54 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:26:38.620 01:19:54 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:26:38.620 01:19:54 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:38.620 01:19:54 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:38.620 01:19:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:38.620 ************************************ 00:26:38.620 START TEST fio_dif_digest 00:26:38.620 ************************************ 00:26:38.620 01:19:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:26:38.620 01:19:54 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:26:38.620 01:19:54 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:26:38.620 01:19:54 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:26:38.620 01:19:54 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:26:38.620 01:19:54 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:26:38.620 01:19:54 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:26:38.620 01:19:54 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:26:38.620 01:19:54 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:26:38.620 01:19:54 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:26:38.620 01:19:54 
nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:26:38.620 01:19:54 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:26:38.620 01:19:54 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:26:38.620 01:19:54 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:26:38.620 01:19:54 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:26:38.620 01:19:54 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:26:38.620 01:19:54 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:38.620 01:19:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.620 01:19:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:38.620 bdev_null0 00:26:38.620 01:19:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.620 01:19:54 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:38.620 01:19:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.620 01:19:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:38.620 01:19:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.620 01:19:54 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:38.620 01:19:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.620 01:19:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:38.620 01:19:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.620 01:19:54 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:38.620 01:19:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.620 01:19:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:38.620 [2024-07-16 01:19:54.535750] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:38.620 01:19:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.620 01:19:54 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:26:38.620 01:19:54 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:26:38.620 01:19:54 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:38.620 01:19:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:26:38.620 01:19:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:26:38.620 01:19:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:38.620 01:19:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:38.620 { 00:26:38.620 "params": { 00:26:38.620 "name": "Nvme$subsystem", 00:26:38.620 "trtype": "$TEST_TRANSPORT", 00:26:38.620 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:38.620 "adrfam": "ipv4", 00:26:38.620 "trsvcid": "$NVMF_PORT", 00:26:38.620 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:38.620 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:38.620 "hdgst": ${hdgst:-false}, 00:26:38.620 "ddgst": ${ddgst:-false} 00:26:38.620 }, 00:26:38.620 "method": 
"bdev_nvme_attach_controller" 00:26:38.620 } 00:26:38.620 EOF 00:26:38.620 )") 00:26:38.620 01:19:54 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:38.620 01:19:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:38.620 01:19:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:38.620 01:19:54 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:26:38.620 01:19:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:38.620 01:19:54 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:26:38.620 01:19:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:38.620 01:19:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:38.620 01:19:54 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:26:38.620 01:19:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:26:38.620 01:19:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:38.620 01:19:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:38.620 01:19:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:26:38.620 01:19:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:38.620 01:19:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:26:38.620 01:19:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:38.620 01:19:54 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:26:38.620 01:19:54 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:26:38.620 01:19:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:26:38.620 01:19:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:26:38.620 01:19:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:38.620 "params": { 00:26:38.620 "name": "Nvme0", 00:26:38.620 "trtype": "tcp", 00:26:38.620 "traddr": "10.0.0.2", 00:26:38.620 "adrfam": "ipv4", 00:26:38.620 "trsvcid": "4420", 00:26:38.620 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:38.620 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:38.620 "hdgst": true, 00:26:38.620 "ddgst": true 00:26:38.620 }, 00:26:38.620 "method": "bdev_nvme_attach_controller" 00:26:38.620 }' 00:26:38.620 01:19:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:38.620 01:19:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:38.620 01:19:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:38.620 01:19:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:38.620 01:19:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:38.620 01:19:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:38.620 01:19:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:38.620 01:19:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:38.620 01:19:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:38.620 01:19:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:38.878 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:38.878 ... 
00:26:38.878 fio-3.35
00:26:38.878 Starting 3 threads
00:26:38.878 EAL: No free 2048 kB hugepages reported on node 1
00:26:51.071
00:26:51.071 filename0: (groupid=0, jobs=1): err= 0: pid=76993: Tue Jul 16 01:20:05 2024
00:26:51.071 read: IOPS=198, BW=24.9MiB/s (26.1MB/s)(250MiB/10046msec)
00:26:51.071 slat (nsec): min=5044, max=91729, avg=19884.94, stdev=6262.90
00:26:51.071 clat (usec): min=8844, max=59784, avg=15030.82, stdev=2479.28
00:26:51.071 lat (usec): min=8864, max=59796, avg=15050.71, stdev=2479.19
00:26:51.071 clat percentiles (usec):
00:26:51.071 | 1.00th=[ 9634], 5.00th=[13173], 10.00th=[13698], 20.00th=[14222],
00:26:51.071 | 30.00th=[14615], 40.00th=[14877], 50.00th=[15008], 60.00th=[15270],
00:26:51.071 | 70.00th=[15533], 80.00th=[15795], 90.00th=[16319], 95.00th=[16909],
00:26:51.071 | 99.00th=[17695], 99.50th=[18220], 99.90th=[60031], 99.95th=[60031],
00:26:51.071 | 99.99th=[60031]
00:26:51.071 bw ( KiB/s): min=22829, max=26880, per=34.06%, avg=25563.85, stdev=819.63, samples=20
00:26:51.071 iops : min= 178, max= 210, avg=199.70, stdev= 6.47, samples=20
00:26:51.071 lat (msec) : 10=2.05%, 20=97.70%, 100=0.25%
00:26:51.071 cpu : usr=91.49%, sys=7.53%, ctx=355, majf=0, minf=198
00:26:51.071 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:26:51.071 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:51.071 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:51.071 issued rwts: total=1999,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:51.071 latency : target=0, window=0, percentile=100.00%, depth=3
00:26:51.071 filename0: (groupid=0, jobs=1): err= 0: pid=76994: Tue Jul 16 01:20:05 2024
00:26:51.071 read: IOPS=195, BW=24.4MiB/s (25.6MB/s)(245MiB/10046msec)
00:26:51.071 slat (nsec): min=4246, max=51947, avg=15666.25, stdev=4443.59
00:26:51.071 clat (usec): min=8246, max=48984, avg=15312.63, stdev=1665.89
00:26:51.071 lat (usec): min=8260, max=49003, avg=15328.30, stdev=1665.90
00:26:51.071 clat percentiles (usec):
00:26:51.071 | 1.00th=[ 9896], 5.00th=[13435], 10.00th=[14091], 20.00th=[14615],
00:26:51.071 | 30.00th=[14877], 40.00th=[15139], 50.00th=[15401], 60.00th=[15664],
00:26:51.071 | 70.00th=[15795], 80.00th=[16188], 90.00th=[16712], 95.00th=[17171],
00:26:51.071 | 99.00th=[17957], 99.50th=[18482], 99.90th=[46400], 99.95th=[49021],
00:26:51.071 | 99.99th=[49021]
00:26:51.071 bw ( KiB/s): min=24064, max=26368, per=33.43%, avg=25090.50, stdev=725.09, samples=20
00:26:51.071 iops : min= 188, max= 206, avg=196.00, stdev= 5.66, samples=20
00:26:51.071 lat (msec) : 10=1.02%, 20=98.88%, 50=0.10%
00:26:51.071 cpu : usr=92.17%, sys=7.34%, ctx=45, majf=0, minf=137
00:26:51.071 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:26:51.071 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:51.071 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:51.071 issued rwts: total=1963,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:51.071 latency : target=0, window=0, percentile=100.00%, depth=3
00:26:51.071 filename0: (groupid=0, jobs=1): err= 0: pid=76995: Tue Jul 16 01:20:05 2024
00:26:51.071 read: IOPS=191, BW=24.0MiB/s (25.2MB/s)(241MiB/10045msec)
00:26:51.071 slat (nsec): min=4149, max=52431, avg=15994.74, stdev=5618.33
00:26:51.071 clat (usec): min=9002, max=56352, avg=15588.18, stdev=3826.21
00:26:51.071 lat (usec): min=9014, max=56365, avg=15604.17, stdev=3826.07
00:26:51.071 clat percentiles (usec):
00:26:51.071 | 1.00th=[12649], 5.00th=[13566], 10.00th=[14091], 20.00th=[14484],
00:26:51.071 | 30.00th=[14877], 40.00th=[15008], 50.00th=[15270], 60.00th=[15401],
00:26:51.071 | 70.00th=[15795], 80.00th=[16057], 90.00th=[16581], 95.00th=[16909],
00:26:51.071 | 99.00th=[18220], 99.50th=[54264], 99.90th=[56361], 99.95th=[56361],
00:26:51.071 | 99.99th=[56361]
00:26:51.071 bw ( KiB/s): min=22784, max=25856, per=32.85%, avg=24652.80, stdev=1060.73, samples=20
00:26:51.071 iops : min= 178, max= 202, avg=192.60, stdev= 8.29, samples=20
00:26:51.071 lat (msec) : 10=0.05%, 20=99.07%, 100=0.88%
00:26:51.071 cpu : usr=91.88%, sys=7.65%, ctx=25, majf=0, minf=109
00:26:51.071 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:26:51.071 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:51.071 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:51.071 issued rwts: total=1928,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:51.071 latency : target=0, window=0, percentile=100.00%, depth=3
00:26:51.071
00:26:51.071 Run status group 0 (all jobs):
00:26:51.071 READ: bw=73.3MiB/s (76.8MB/s), 24.0MiB/s-24.9MiB/s (25.2MB/s-26.1MB/s), io=736MiB (772MB), run=10045-10046msec
00:26:51.071 01:20:05 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0
00:26:51.071 01:20:05 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub
00:26:51.071 01:20:05 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@"
00:26:51.071 01:20:05 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0
00:26:51.071 01:20:05 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0
00:26:51.071 01:20:05 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:26:51.071 01:20:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:51.071 01:20:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x
00:26:51.071 01:20:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:51.071 01:20:05 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:26:51.071 01:20:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:51.071 01:20:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x
00:26:51.071 01:20:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:51.071
00:26:51.071 real 0m11.126s
00:26:51.071 user 0m28.800s
00:26:51.071 sys 0m2.501s
00:26:51.071 01:20:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable
00:26:51.071 01:20:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x
00:26:51.071 ************************************
00:26:51.071 END TEST fio_dif_digest
00:26:51.071 ************************************
00:26:51.071 01:20:05 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0
00:26:51.071 01:20:05 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT
00:26:51.071 01:20:05 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini
00:26:51.071 01:20:05 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup
00:26:51.071 01:20:05 nvmf_dif -- nvmf/common.sh@117 -- # sync
00:26:51.071 01:20:05 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:26:51.071 01:20:05 nvmf_dif -- nvmf/common.sh@120 -- # set +e
00:26:51.071 01:20:05 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20}
00:26:51.071 01:20:05 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:26:51.071 rmmod nvme_tcp 00:26:51.071 rmmod nvme_fabrics 00:26:51.071 rmmod nvme_keyring 00:26:51.071 01:20:05 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:51.071 01:20:05 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:26:51.071 01:20:05 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:26:51.071 01:20:05 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 70915 ']' 00:26:51.071 01:20:05 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 70915 00:26:51.071 01:20:05 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 70915 ']' 00:26:51.071 01:20:05 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 70915 00:26:51.071 01:20:05 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:26:51.071 01:20:05 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:51.071 01:20:05 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 70915 00:26:51.071 01:20:05 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:51.071 01:20:05 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:51.071 01:20:05 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 70915' 00:26:51.071 killing process with pid 70915 00:26:51.071 01:20:05 nvmf_dif -- common/autotest_common.sh@967 -- # kill 70915 00:26:51.071 01:20:05 nvmf_dif -- common/autotest_common.sh@972 -- # wait 70915 00:26:51.071 01:20:05 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:26:51.071 01:20:05 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:51.330 Waiting for block devices as requested 00:26:51.330 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:51.330 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:51.589 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:51.589 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:51.589 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:51.589 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:51.847 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:51.847 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:51.847 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:26:52.107 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:52.107 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:52.107 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:52.108 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:52.367 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:52.367 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:52.367 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:52.627 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:52.627 01:20:08 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:52.627 01:20:08 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:52.627 01:20:08 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:52.627 01:20:08 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:52.627 01:20:08 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:52.627 01:20:08 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:52.627 01:20:08 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:55.168 01:20:10 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:55.168 00:26:55.168 real 1m7.350s 00:26:55.168 user 6m30.785s 00:26:55.168 sys 0m19.420s 00:26:55.168 01:20:10 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:55.168 01:20:10 
nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:55.168 ************************************ 00:26:55.168 END TEST nvmf_dif 00:26:55.168 ************************************ 00:26:55.168 01:20:10 -- common/autotest_common.sh@1142 -- # return 0 00:26:55.168 01:20:10 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:55.168 01:20:10 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:55.168 01:20:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:55.168 01:20:10 -- common/autotest_common.sh@10 -- # set +x 00:26:55.168 ************************************ 00:26:55.168 START TEST nvmf_abort_qd_sizes 00:26:55.168 ************************************ 00:26:55.168 01:20:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:55.168 * Looking for test storage... 00:26:55.168 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:55.168 01:20:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:55.168 01:20:10 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:26:55.168 01:20:10 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:55.168 01:20:10 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:55.168 01:20:10 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:55.168 01:20:10 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:55.168 01:20:10 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:55.168 01:20:10 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:55.168 01:20:10 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:55.168 01:20:10 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:55.168 01:20:10 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:55.168 01:20:10 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:55.168 01:20:10 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:26:55.168 01:20:10 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:55.168 01:20:10 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:55.168 01:20:10 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:55.168 01:20:10 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:55.168 01:20:10 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:55.168 01:20:10 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:55.168 01:20:10 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:55.168 01:20:10 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:55.168 01:20:10 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:55.168 01:20:10 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.168 01:20:10 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.168 01:20:10 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.168 01:20:10 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:26:55.168 01:20:10 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.168 01:20:10 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:26:55.168 01:20:10 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:55.168 01:20:10 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:55.168 01:20:10 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:55.168 01:20:10 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:55.168 01:20:10 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:55.168 01:20:10 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:55.168 01:20:10 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:55.168 01:20:10 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:55.168 01:20:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:26:55.168 01:20:10 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:55.168 01:20:10 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:55.168 01:20:10 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:55.168 01:20:10 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:55.168 01:20:10 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:55.168 01:20:10 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:55.168 01:20:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:55.168 01:20:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:55.168 01:20:10 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:55.168 01:20:10 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:55.168 01:20:10 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:26:55.168 01:20:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:57.079 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:57.079 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:26:57.079 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:57.079 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:57.079 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:57.079 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:57.079 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:57.079 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:26:57.079 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:57.079 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:26:57.079 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:26:57.079 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:26:57.079 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:26:57.079 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:26:57.079 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:26:57.079 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:57.079 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:57.079 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:57.079 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:57.079 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:57.079 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:57.079 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:57.079 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:57.079 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:57.079 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:57.079 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:57.079 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:57.079 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:57.079 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:57.079 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:57.079 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:57.079 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:57.079 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:57.079 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:57.079 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:57.079 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:57.079 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:57.079 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:57.079 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:57.079 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:57.079 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:57.079 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:57.079 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:57.079 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:57.079 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:57.079 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:57.079 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:57.079 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:57.079 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:57.079 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:57.079 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:57.079 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:57.079 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:57.079 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:57.079 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:57.079 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:57.079 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:57.079 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:57.079 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:57.079 Found net devices under 0000:09:00.0: cvl_0_0 00:26:57.079 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:57.079 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:57.079 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:57.079 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:57.079 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:57.079 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:57.080 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:57.080 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:57.080 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:57.080 Found net devices under 0000:09:00.1: cvl_0_1 00:26:57.080 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:57.080 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
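Having picked cvl_0_0/cvl_0_1 above, the nvmf_tcp_init sequence that follows splits them across a network namespace so that initiator and target traffic really crosses the physical link. Condensed from the trace below into a standalone sketch, using this run's interface names and addresses:

  # Target port lives in its own namespace; the initiator stays in the root namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target sanity check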
00:26:57.080 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:26:57.080 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:57.080 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:57.080 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:57.080 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:57.080 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:57.080 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:57.080 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:57.080 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:57.080 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:57.080 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:57.080 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:57.080 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:57.080 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:57.080 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:57.080 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:57.080 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:57.080 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:57.080 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:57.080 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:57.080 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:57.080 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:57.080 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:57.080 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:57.080 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:57.080 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:26:57.080 00:26:57.080 --- 10.0.0.2 ping statistics --- 00:26:57.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:57.080 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:26:57.080 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:57.080 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:57.080 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:26:57.080 00:26:57.080 --- 10.0.0.1 ping statistics --- 00:26:57.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:57.080 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:26:57.080 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:57.080 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:26:57.080 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:26:57.080 01:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:58.015 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:58.015 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:26:58.015 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:58.015 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:26:58.015 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:58.015 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:26:58.015 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:26:58.015 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:26:58.015 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:58.015 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:26:58.015 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:58.015 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:26:58.015 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:58.015 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:26:58.015 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:26:58.275 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:26:59.211 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:26:59.211 01:20:15 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:59.211 01:20:15 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:59.211 01:20:15 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:59.211 01:20:15 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:59.211 01:20:15 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:59.211 01:20:15 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:59.211 01:20:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:26:59.211 01:20:15 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:59.211 01:20:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:59.211 01:20:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:59.211 01:20:15 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=81906 00:26:59.211 01:20:15 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:26:59.211 01:20:15 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 81906 00:26:59.211 01:20:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 81906 ']' 00:26:59.211 01:20:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:59.211 01:20:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:59.211 01:20:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:26:59.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:59.211 01:20:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:59.211 01:20:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:59.211 [2024-07-16 01:20:15.142899] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:26:59.211 [2024-07-16 01:20:15.143011] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:59.211 EAL: No free 2048 kB hugepages reported on node 1 00:26:59.468 [2024-07-16 01:20:15.210081] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:59.468 [2024-07-16 01:20:15.318895] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:59.468 [2024-07-16 01:20:15.318961] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:59.468 [2024-07-16 01:20:15.318976] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:59.468 [2024-07-16 01:20:15.318987] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:59.468 [2024-07-16 01:20:15.318996] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:59.468 [2024-07-16 01:20:15.319088] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:59.468 [2024-07-16 01:20:15.319152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:59.468 [2024-07-16 01:20:15.319218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:59.468 [2024-07-16 01:20:15.319221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:00.401 01:20:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:00.401 01:20:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:27:00.401 01:20:16 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:00.401 01:20:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:00.401 01:20:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:00.401 01:20:16 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:00.401 01:20:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:27:00.401 01:20:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:27:00.401 01:20:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:27:00.401 01:20:16 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:27:00.401 01:20:16 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:27:00.401 01:20:16 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:0b:00.0 ]] 00:27:00.401 01:20:16 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:27:00.401 01:20:16 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:27:00.401 01:20:16 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:0b:00.0 ]] 00:27:00.401 01:20:16 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:27:00.401 01:20:16 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:27:00.401 01:20:16 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:27:00.401 01:20:16 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:27:00.401 01:20:16 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:0b:00.0 00:27:00.401 01:20:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:27:00.401 01:20:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:0b:00.0 00:27:00.401 01:20:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:27:00.401 01:20:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:00.401 01:20:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:00.401 01:20:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:00.401 ************************************ 00:27:00.401 START TEST spdk_target_abort 00:27:00.401 ************************************ 00:27:00.401 01:20:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:27:00.401 01:20:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:27:00.401 01:20:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:0b:00.0 -b spdk_target 00:27:00.401 01:20:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:00.401 01:20:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:03.678 spdk_targetn1 00:27:03.678 01:20:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.678 01:20:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:03.678 01:20:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.678 01:20:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:03.678 [2024-07-16 01:20:18.968618] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:03.678 01:20:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.678 01:20:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:27:03.678 01:20:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.678 01:20:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:03.678 01:20:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.678 01:20:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:27:03.678 01:20:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.678 01:20:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:03.678 01:20:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.678 01:20:18 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:27:03.678 01:20:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.678 01:20:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:03.678 [2024-07-16 01:20:19.000873] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:03.678 01:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.678 01:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:27:03.678 01:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:27:03.678 01:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:27:03.678 01:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:27:03.678 01:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:27:03.678 01:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:27:03.678 01:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:27:03.678 01:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:27:03.678 01:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:27:03.678 01:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:03.678 01:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:27:03.678 01:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:03.678 01:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:27:03.678 01:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:03.678 01:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:27:03.678 01:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:03.678 01:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:03.678 01:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:03.678 01:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:03.678 01:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:03.678 01:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:03.678 EAL: No free 2048 kB hugepages 
reported on node 1
00:27:06.196 Initializing NVMe Controllers
00:27:06.196 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn
00:27:06.196 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:27:06.196 Initialization complete. Launching workers.
00:27:06.196 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12829, failed: 0
00:27:06.196 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1255, failed to submit 11574
00:27:06.196 success 748, unsuccess 507, failed 0
00:27:06.196 01:20:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:27:06.196 01:20:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:27:06.196 EAL: No free 2048 kB hugepages reported on node 1
00:27:09.492 Initializing NVMe Controllers
00:27:09.492 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn
00:27:09.492 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:27:09.492 Initialization complete. Launching workers.
00:27:09.492 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8649, failed: 0
00:27:09.492 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1244, failed to submit 7405
00:27:09.492 success 318, unsuccess 926, failed 0
00:27:09.492 01:20:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:27:09.492 01:20:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:27:09.492 EAL: No free 2048 kB hugepages reported on node 1
00:27:12.795 Initializing NVMe Controllers
00:27:12.795 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn
00:27:12.795 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:27:12.795 Initialization complete. Launching workers.
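The spdk_target_abort bring-up traced above reduces to a handful of RPCs against the target's default /var/tmp/spdk.sock. A minimal sketch of that sequence, with the commands taken verbatim from the trace and only the standalone wiring assumed:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc bdev_nvme_attach_controller -t pcie -a 0000:0b:00.0 -b spdk_target   # local NVMe disk shows up as bdev spdk_targetn1
$rpc nvmf_create_transport -t tcp -o -u 8192                              # transport flags exactly as traced
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
# rabort then sweeps queue depths 4, 24 and 64 with the abort example:
# build/examples/abort -q $qd -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'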
00:27:12.795 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31065, failed: 0 00:27:12.795 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2819, failed to submit 28246 00:27:12.795 success 523, unsuccess 2296, failed 0 00:27:12.795 01:20:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:27:12.795 01:20:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.795 01:20:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:12.795 01:20:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.795 01:20:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:27:12.795 01:20:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.795 01:20:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:14.161 01:20:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.161 01:20:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 81906 00:27:14.161 01:20:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 81906 ']' 00:27:14.161 01:20:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 81906 00:27:14.161 01:20:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:27:14.161 01:20:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:14.161 01:20:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81906 00:27:14.161 01:20:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:14.161 01:20:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:14.161 01:20:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81906' 00:27:14.161 killing process with pid 81906 00:27:14.161 01:20:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 81906 00:27:14.161 01:20:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 81906 00:27:14.418 00:27:14.418 real 0m14.107s 00:27:14.418 user 0m55.878s 00:27:14.418 sys 0m2.574s 00:27:14.418 01:20:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:14.418 01:20:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:14.418 ************************************ 00:27:14.418 END TEST spdk_target_abort 00:27:14.418 ************************************ 00:27:14.418 01:20:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:27:14.418 01:20:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:27:14.418 01:20:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:14.418 01:20:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:14.418 01:20:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:14.418 
************************************ 00:27:14.418 START TEST kernel_target_abort 00:27:14.418 ************************************ 00:27:14.418 01:20:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:27:14.418 01:20:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:27:14.418 01:20:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:27:14.418 01:20:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:14.418 01:20:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:14.418 01:20:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:14.418 01:20:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:14.418 01:20:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:14.418 01:20:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:14.418 01:20:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:14.418 01:20:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:14.418 01:20:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:14.418 01:20:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:14.418 01:20:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:14.418 01:20:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:14.418 01:20:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:14.418 01:20:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:14.418 01:20:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:14.418 01:20:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:27:14.418 01:20:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:27:14.418 01:20:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:14.418 01:20:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:14.418 01:20:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:15.792 Waiting for block devices as requested 00:27:15.792 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:15.792 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:15.792 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:16.050 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:16.050 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:16.050 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:16.050 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:16.308 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:16.308 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:27:16.308 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:16.565 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:16.565 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:16.565 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:16.565 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:16.823 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:16.823 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:16.823 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:17.082 01:20:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:17.082 01:20:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:17.082 01:20:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:17.082 01:20:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:27:17.082 01:20:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:17.082 01:20:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:17.082 01:20:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:17.082 01:20:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:17.082 01:20:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:17.082 No valid GPT data, bailing 00:27:17.082 01:20:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:17.082 01:20:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:27:17.082 01:20:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:27:17.082 01:20:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:17.082 01:20:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:17.082 01:20:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:17.082 01:20:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:17.082 01:20:32 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:27:17.082 01:20:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn
00:27:17.082 01:20:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1
00:27:17.082 01:20:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1
00:27:17.082 01:20:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1
00:27:17.082 01:20:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1
00:27:17.082 01:20:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp
00:27:17.082 01:20:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420
00:27:17.082 01:20:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4
00:27:17.082 01:20:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/
00:27:17.082 01:20:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420
00:27:17.082
00:27:17.082 Discovery Log Number of Records 2, Generation counter 2
00:27:17.082 =====Discovery Log Entry 0======
00:27:17.082 trtype: tcp
00:27:17.082 adrfam: ipv4
00:27:17.082 subtype: current discovery subsystem
00:27:17.082 treq: not specified, sq flow control disable supported
00:27:17.082 portid: 1
00:27:17.082 trsvcid: 4420
00:27:17.082 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:27:17.082 traddr: 10.0.0.1
00:27:17.082 eflags: none
00:27:17.082 sectype: none
00:27:17.082 =====Discovery Log Entry 1======
00:27:17.082 trtype: tcp
00:27:17.082 adrfam: ipv4
00:27:17.082 subtype: nvme subsystem
00:27:17.082 treq: not specified, sq flow control disable supported
00:27:17.082 portid: 1
00:27:17.082 trsvcid: 4420
00:27:17.082 subnqn: nqn.2016-06.io.spdk:testnqn
00:27:17.082 traddr: 10.0.0.1
00:27:17.082 eflags: none
00:27:17.082 sectype: none
00:27:17.082 01:20:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn
00:27:17.082 01:20:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp
00:27:17.082 01:20:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4
00:27:17.082 01:20:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1
00:27:17.082 01:20:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420
00:27:17.082 01:20:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn
00:27:17.082 01:20:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd
00:27:17.082 01:20:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r
00:27:17.082 01:20:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64)
00:27:17.082 01:20:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:27:17.082 01:20:33
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:27:17.082 01:20:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:17.082 01:20:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:27:17.082 01:20:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:17.082 01:20:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:27:17.082 01:20:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:17.082 01:20:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:27:17.082 01:20:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:17.082 01:20:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:17.082 01:20:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:17.082 01:20:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:17.394 EAL: No free 2048 kB hugepages reported on node 1 00:27:20.677 Initializing NVMe Controllers 00:27:20.677 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:20.677 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:20.677 Initialization complete. Launching workers. 00:27:20.677 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 50627, failed: 0 00:27:20.677 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 50627, failed to submit 0 00:27:20.677 success 0, unsuccess 50627, failed 0 00:27:20.677 01:20:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:20.677 01:20:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:20.677 EAL: No free 2048 kB hugepages reported on node 1 00:27:23.953 Initializing NVMe Controllers 00:27:23.953 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:23.953 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:23.953 Initialization complete. Launching workers. 
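The kernel_target_abort half builds the equivalent target out of the kernel's nvmet configfs tree instead of RPCs. A sketch of the mkdir/echo/ln sequence traced above; the trace hides the redirect targets, so the attribute file names below are hedged from nvmet's usual configfs layout:

modprobe nvmet                                        # nvmet_tcp ends up loaded too (both are removed at cleanup)
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1
mkdir "$subsys" "$subsys/namespaces/1" "$port"
echo 1 > "$subsys/attr_allow_any_host"                # assumed target of the traced 'echo 1'
# the harness also writes "SPDK-nqn.2016-06.io.spdk:testnqn" into a subsystem attribute (serial; exact file hedged)
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"                   # publish the subsystem on the port
# 'nvme discover' against 10.0.0.1:4420 then returns the two discovery log entries shown above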
00:27:23.953 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 94089, failed: 0 00:27:23.953 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 23750, failed to submit 70339 00:27:23.953 success 0, unsuccess 23750, failed 0 00:27:23.953 01:20:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:23.953 01:20:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:23.953 EAL: No free 2048 kB hugepages reported on node 1 00:27:26.478 Initializing NVMe Controllers 00:27:26.478 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:26.478 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:26.478 Initialization complete. Launching workers. 00:27:26.478 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 91274, failed: 0 00:27:26.478 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 22810, failed to submit 68464 00:27:26.478 success 0, unsuccess 22810, failed 0 00:27:26.478 01:20:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:27:26.478 01:20:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:26.478 01:20:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:27:26.478 01:20:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:26.478 01:20:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:26.478 01:20:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:26.478 01:20:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:26.478 01:20:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:26.478 01:20:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:26.478 01:20:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:27.853 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:27.853 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:27.853 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:27.853 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:27.853 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:27.853 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:27.853 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:27.853 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:27.853 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:27.853 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:27.853 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:27.853 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:27.853 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:27.853 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:27:27.853 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:27.853 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:28.788 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:27:29.047 00:27:29.047 real 0m14.611s 00:27:29.047 user 0m6.210s 00:27:29.047 sys 0m3.447s 00:27:29.047 01:20:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:29.047 01:20:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:29.047 ************************************ 00:27:29.047 END TEST kernel_target_abort 00:27:29.047 ************************************ 00:27:29.047 01:20:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:27:29.047 01:20:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:27:29.047 01:20:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:27:29.047 01:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:29.047 01:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:27:29.047 01:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:29.047 01:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:27:29.047 01:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:29.047 01:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:29.047 rmmod nvme_tcp 00:27:29.047 rmmod nvme_fabrics 00:27:29.047 rmmod nvme_keyring 00:27:29.047 01:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:29.047 01:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:27:29.047 01:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:27:29.047 01:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 81906 ']' 00:27:29.047 01:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 81906 00:27:29.047 01:20:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 81906 ']' 00:27:29.047 01:20:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 81906 00:27:29.047 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (81906) - No such process 00:27:29.047 01:20:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 81906 is not found' 00:27:29.047 Process with pid 81906 is not found 00:27:29.047 01:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:27:29.047 01:20:44 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:30.419 Waiting for block devices as requested 00:27:30.419 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:30.419 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:30.419 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:30.419 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:30.677 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:30.677 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:30.677 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:30.677 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:30.935 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:27:30.935 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:30.935 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:31.194 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:31.194 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:31.194 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 
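clean_kernel_target, traced above before the PCI rebinds, unwinds that tree in strict child-to-parent order; roughly, reusing the setup sketch's variables:

echo 0 > "$subsys/namespaces/1/enable"                # quiesce the namespace first
rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"  # unpublish before removing anything
rmdir "$subsys/namespaces/1" "$port" "$subsys"
modprobe -r nvmet_tcp nvmet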
00:27:31.194 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:31.451 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:31.451 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:31.451 01:20:47 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:31.451 01:20:47 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:31.451 01:20:47 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:31.451 01:20:47 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:31.451 01:20:47 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:31.451 01:20:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:31.451 01:20:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:33.982 01:20:49 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:33.982 00:27:33.982 real 0m38.850s 00:27:33.982 user 1m4.343s 00:27:33.982 sys 0m9.487s 00:27:33.982 01:20:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:33.982 01:20:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:33.982 ************************************ 00:27:33.982 END TEST nvmf_abort_qd_sizes 00:27:33.982 ************************************ 00:27:33.982 01:20:49 -- common/autotest_common.sh@1142 -- # return 0 00:27:33.982 01:20:49 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:27:33.982 01:20:49 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:33.982 01:20:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:33.982 01:20:49 -- common/autotest_common.sh@10 -- # set +x 00:27:33.982 ************************************ 00:27:33.982 START TEST keyring_file 00:27:33.982 ************************************ 00:27:33.982 01:20:49 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:27:33.982 * Looking for test storage... 
00:27:33.982 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:27:33.982 01:20:49 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:27:33.982 01:20:49 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:33.982 01:20:49 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:27:33.982 01:20:49 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:33.982 01:20:49 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:33.983 01:20:49 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:33.983 01:20:49 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:33.983 01:20:49 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:33.983 01:20:49 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:33.983 01:20:49 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:33.983 01:20:49 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:33.983 01:20:49 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:33.983 01:20:49 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:33.983 01:20:49 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:33.983 01:20:49 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:33.983 01:20:49 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:33.983 01:20:49 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:33.983 01:20:49 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:33.983 01:20:49 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:33.983 01:20:49 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:33.983 01:20:49 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:33.983 01:20:49 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:33.983 01:20:49 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:33.983 01:20:49 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.983 01:20:49 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.983 01:20:49 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.983 01:20:49 keyring_file -- paths/export.sh@5 -- # export PATH 00:27:33.983 01:20:49 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.983 01:20:49 keyring_file -- nvmf/common.sh@47 -- # : 0 00:27:33.983 01:20:49 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:33.983 01:20:49 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:33.983 01:20:49 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:33.983 01:20:49 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:33.983 01:20:49 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:33.983 01:20:49 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:33.983 01:20:49 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:33.983 01:20:49 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:33.983 01:20:49 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:27:33.983 01:20:49 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:27:33.983 01:20:49 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:27:33.983 01:20:49 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:27:33.983 01:20:49 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:27:33.983 01:20:49 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:27:33.983 01:20:49 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:27:33.983 01:20:49 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:33.983 01:20:49 keyring_file -- keyring/common.sh@17 -- # name=key0 00:27:33.983 01:20:49 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:33.983 01:20:49 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:33.983 01:20:49 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:33.983 01:20:49 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.hlQaCljAwh 00:27:33.983 01:20:49 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:33.983 01:20:49 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:33.983 01:20:49 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:33.983 01:20:49 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:33.983 01:20:49 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:33.983 01:20:49 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:33.983 01:20:49 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:33.983 01:20:49 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.hlQaCljAwh 00:27:33.983 01:20:49 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.hlQaCljAwh 00:27:33.983 01:20:49 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.hlQaCljAwh 00:27:33.983 01:20:49 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:27:33.983 01:20:49 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:33.983 01:20:49 keyring_file -- keyring/common.sh@17 -- # name=key1 00:27:33.983 01:20:49 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:27:33.983 01:20:49 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:33.983 01:20:49 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:33.983 01:20:49 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.VjNBfGbfRS 00:27:33.983 01:20:49 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:27:33.983 01:20:49 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:27:33.983 01:20:49 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:33.983 01:20:49 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:33.983 01:20:49 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:27:33.983 01:20:49 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:33.983 01:20:49 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:33.983 01:20:49 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.VjNBfGbfRS 00:27:33.983 01:20:49 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.VjNBfGbfRS 00:27:33.983 01:20:49 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.VjNBfGbfRS 00:27:33.983 01:20:49 keyring_file -- keyring/file.sh@30 -- # tgtpid=87790 00:27:33.983 01:20:49 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:27:33.983 01:20:49 keyring_file -- keyring/file.sh@32 -- # waitforlisten 87790 00:27:33.983 01:20:49 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 87790 ']' 00:27:33.983 01:20:49 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:33.983 01:20:49 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:33.983 01:20:49 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:33.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:33.983 01:20:49 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:33.983 01:20:49 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:33.983 [2024-07-16 01:20:49.691069] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 
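prep_key, traced at the start of keyring_file, only renders a key into the TLS PSK interchange form and locks the file down to 0600. A hedged reconstruction of what its inline python appears to compute; the hash-indicator byte, the CRC32 trailer and treating the key string as ASCII are assumptions, since the trace does not show the heredoc body:

key=00112233445566778899aabbccddeeff
path=$(mktemp)                                   # /tmp/tmp.hlQaCljAwh in this run
python3 - "$key" <<'PY' > "$path"
import base64, sys, zlib
raw = sys.argv[1].encode()                       # assumption: key used as raw ASCII bytes
crc = zlib.crc32(raw).to_bytes(4, "little")      # assumption: little-endian CRC32 trailer
print("NVMeTLSkey-1:00:%s:" % base64.b64encode(raw + crc).decode())
PY
chmod 0600 "$path"                               # keyring_file_add_key insists on owner-only access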
00:27:33.983 [2024-07-16 01:20:49.691175] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87790 ] 00:27:33.983 EAL: No free 2048 kB hugepages reported on node 1 00:27:33.983 [2024-07-16 01:20:49.747713] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:33.983 [2024-07-16 01:20:49.854516] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:34.241 01:20:50 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:34.241 01:20:50 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:27:34.241 01:20:50 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:27:34.241 01:20:50 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.241 01:20:50 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:34.241 [2024-07-16 01:20:50.110453] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:34.241 null0 00:27:34.241 [2024-07-16 01:20:50.142513] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:34.241 [2024-07-16 01:20:50.143013] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:34.241 [2024-07-16 01:20:50.150518] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:27:34.241 01:20:50 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.241 01:20:50 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:34.241 01:20:50 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:34.241 01:20:50 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:34.241 01:20:50 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:34.241 01:20:50 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:34.241 01:20:50 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:34.241 01:20:50 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:34.241 01:20:50 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:34.241 01:20:50 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.241 01:20:50 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:34.241 [2024-07-16 01:20:50.158529] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:27:34.241 request: 00:27:34.241 { 00:27:34.241 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:27:34.241 "secure_channel": false, 00:27:34.241 "listen_address": { 00:27:34.242 "trtype": "tcp", 00:27:34.242 "traddr": "127.0.0.1", 00:27:34.242 "trsvcid": "4420" 00:27:34.242 }, 00:27:34.242 "method": "nvmf_subsystem_add_listener", 00:27:34.242 "req_id": 1 00:27:34.242 } 00:27:34.242 Got JSON-RPC error response 00:27:34.242 response: 00:27:34.242 { 00:27:34.242 "code": -32602, 00:27:34.242 "message": "Invalid parameters" 00:27:34.242 } 00:27:34.242 01:20:50 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:34.242 01:20:50 keyring_file -- common/autotest_common.sh@651 -- # es=1 
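The rejected nvmf_subsystem_add_listener above is the point of the first subtest: the target already listens on 127.0.0.1:4420 with TLS enabled, so re-adding the same address as a plain listener (secure_channel false) must come back as -32602 "Listener already exists". As a sketch against the target socket:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
if $rpc nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0; then
    echo 'listener re-add unexpectedly succeeded' >&2; exit 1    # the NOT wrapper requires this failure
fi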
00:27:34.242 01:20:50 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:34.242 01:20:50 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:34.242 01:20:50 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:34.242 01:20:50 keyring_file -- keyring/file.sh@46 -- # bperfpid=87806 00:27:34.242 01:20:50 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:27:34.242 01:20:50 keyring_file -- keyring/file.sh@48 -- # waitforlisten 87806 /var/tmp/bperf.sock 00:27:34.242 01:20:50 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 87806 ']' 00:27:34.242 01:20:50 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:34.242 01:20:50 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:34.242 01:20:50 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:34.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:34.242 01:20:50 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:34.242 01:20:50 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:34.242 [2024-07-16 01:20:50.203353] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 00:27:34.242 [2024-07-16 01:20:50.203439] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87806 ] 00:27:34.242 EAL: No free 2048 kB hugepages reported on node 1 00:27:34.500 [2024-07-16 01:20:50.260329] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:34.500 [2024-07-16 01:20:50.365300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:34.500 01:20:50 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:34.500 01:20:50 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:27:34.500 01:20:50 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.hlQaCljAwh 00:27:34.500 01:20:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.hlQaCljAwh 00:27:34.757 01:20:50 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.VjNBfGbfRS 00:27:34.757 01:20:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.VjNBfGbfRS 00:27:35.015 01:20:50 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:27:35.015 01:20:50 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:27:35.015 01:20:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:35.015 01:20:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:35.015 01:20:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:35.272 01:20:51 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.hlQaCljAwh == \/\t\m\p\/\t\m\p\.\h\l\Q\a\C\l\j\A\w\h ]] 00:27:35.272 01:20:51 keyring_file -- keyring/file.sh@52 -- # 
get_key key1 00:27:35.272 01:20:51 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:27:35.272 01:20:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:35.272 01:20:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:35.272 01:20:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:35.530 01:20:51 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.VjNBfGbfRS == \/\t\m\p\/\t\m\p\.\V\j\N\B\f\G\b\f\R\S ]] 00:27:35.530 01:20:51 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:27:35.530 01:20:51 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:35.530 01:20:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:35.530 01:20:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:35.530 01:20:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:35.530 01:20:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:35.787 01:20:51 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:27:35.787 01:20:51 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:27:35.787 01:20:51 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:35.787 01:20:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:35.787 01:20:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:35.787 01:20:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:35.787 01:20:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:36.044 01:20:51 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:27:36.044 01:20:51 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:36.044 01:20:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:36.302 [2024-07-16 01:20:52.209584] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:36.302 nvme0n1 00:27:36.559 01:20:52 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:27:36.559 01:20:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:36.559 01:20:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:36.559 01:20:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:36.559 01:20:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:36.559 01:20:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:36.559 01:20:52 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:27:36.559 01:20:52 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:27:36.559 01:20:52 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:36.559 01:20:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:36.559 01:20:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:27:36.559 01:20:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:36.559 01:20:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:36.816 01:20:52 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:27:36.816 01:20:52 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:37.074 Running I/O for 1 seconds... 00:27:38.006 00:27:38.006 Latency(us) 00:27:38.006 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:38.006 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:27:38.006 nvme0n1 : 1.01 8628.66 33.71 0.00 0.00 14764.12 7233.23 23398.78 00:27:38.006 =================================================================================================================== 00:27:38.006 Total : 8628.66 33.71 0.00 0.00 14764.12 7233.23 23398.78 00:27:38.006 0 00:27:38.006 01:20:53 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:27:38.006 01:20:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:27:38.262 01:20:54 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:27:38.262 01:20:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:38.262 01:20:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:38.262 01:20:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:38.262 01:20:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:38.262 01:20:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:38.548 01:20:54 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:27:38.548 01:20:54 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:27:38.548 01:20:54 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:38.548 01:20:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:38.548 01:20:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:38.548 01:20:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:38.548 01:20:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:38.805 01:20:54 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:27:38.805 01:20:54 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:38.805 01:20:54 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:38.805 01:20:54 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:38.805 01:20:54 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:38.805 01:20:54 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:38.805 01:20:54 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 
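The happy path above all talks to bdevperf's side socket rather than the target: both key files go into the bperf keyring, the controller is attached with --psk key0, and bdevperf.py drives one second of 4k randrw through it. Condensed from the trace:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock
$rpc -s $sock keyring_file_add_key key0 /tmp/tmp.hlQaCljAwh
$rpc -s $sock keyring_file_add_key key1 /tmp/tmp.VjNBfGbfRS
$rpc -s $sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s $sock perform_tests                       # ~8.6k IOPS at qd 128 in this run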
00:27:38.805 01:20:54 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:38.805 01:20:54 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:38.805 01:20:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:39.117 [2024-07-16 01:20:54.913018] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:27:39.117 [2024-07-16 01:20:54.913662] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224ca30 (107): Transport endpoint is not connected 00:27:39.117 [2024-07-16 01:20:54.914655] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224ca30 (9): Bad file descriptor 00:27:39.117 [2024-07-16 01:20:54.915662] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:39.117 [2024-07-16 01:20:54.915681] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:27:39.117 [2024-07-16 01:20:54.915710] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:39.117 request: 00:27:39.117 { 00:27:39.117 "name": "nvme0", 00:27:39.117 "trtype": "tcp", 00:27:39.117 "traddr": "127.0.0.1", 00:27:39.117 "adrfam": "ipv4", 00:27:39.117 "trsvcid": "4420", 00:27:39.117 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:39.117 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:39.117 "prchk_reftag": false, 00:27:39.117 "prchk_guard": false, 00:27:39.117 "hdgst": false, 00:27:39.117 "ddgst": false, 00:27:39.117 "psk": "key1", 00:27:39.117 "method": "bdev_nvme_attach_controller", 00:27:39.117 "req_id": 1 00:27:39.117 } 00:27:39.117 Got JSON-RPC error response 00:27:39.117 response: 00:27:39.117 { 00:27:39.117 "code": -5, 00:27:39.117 "message": "Input/output error" 00:27:39.117 } 00:27:39.117 01:20:54 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:27:39.117 01:20:54 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:39.117 01:20:54 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:39.117 01:20:54 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:39.117 01:20:54 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:27:39.117 01:20:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:39.117 01:20:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:39.117 01:20:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:39.117 01:20:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:39.117 01:20:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:39.373 01:20:55 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:27:39.373 01:20:55 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:27:39.373 01:20:55 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:39.373 01:20:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 
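The negative case embedded above is the mirror image: attaching with the mismatched key1 has to fail during the TLS handshake, surfacing as JSON-RPC -5 (Input/output error) rather than a parameter error, and must leave both key refcounts at 1. Continuing the previous sketch:

if $rpc -s $sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1; then
    echo 'attach with mismatched PSK unexpectedly succeeded' >&2; exit 1
fi
$rpc -s $sock keyring_get_keys | jq -r '.[] | select(.name == "key0") | .refcnt'   # expect 1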
00:27:39.373 01:20:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:39.373 01:20:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:39.373 01:20:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:39.629 01:20:55 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:27:39.629 01:20:55 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:27:39.629 01:20:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:39.886 01:20:55 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:27:39.886 01:20:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:27:40.141 01:20:55 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:27:40.141 01:20:55 keyring_file -- keyring/file.sh@77 -- # jq length 00:27:40.141 01:20:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:40.397 01:20:56 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:27:40.397 01:20:56 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.hlQaCljAwh 00:27:40.397 01:20:56 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.hlQaCljAwh 00:27:40.397 01:20:56 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:40.397 01:20:56 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.hlQaCljAwh 00:27:40.397 01:20:56 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:40.397 01:20:56 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:40.397 01:20:56 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:40.397 01:20:56 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:40.397 01:20:56 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.hlQaCljAwh 00:27:40.397 01:20:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.hlQaCljAwh 00:27:40.653 [2024-07-16 01:20:56.399310] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.hlQaCljAwh': 0100660 00:27:40.653 [2024-07-16 01:20:56.399343] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:27:40.653 request: 00:27:40.653 { 00:27:40.653 "name": "key0", 00:27:40.653 "path": "/tmp/tmp.hlQaCljAwh", 00:27:40.653 "method": "keyring_file_add_key", 00:27:40.653 "req_id": 1 00:27:40.653 } 00:27:40.653 Got JSON-RPC error response 00:27:40.653 response: 00:27:40.653 { 00:27:40.653 "code": -1, 00:27:40.653 "message": "Operation not permitted" 00:27:40.653 } 00:27:40.653 01:20:56 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:27:40.653 01:20:56 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:40.653 01:20:56 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:40.653 01:20:56 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:40.653 
01:20:56 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.hlQaCljAwh 00:27:40.653 01:20:56 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.hlQaCljAwh 00:27:40.653 01:20:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.hlQaCljAwh 00:27:40.910 01:20:56 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.hlQaCljAwh 00:27:40.910 01:20:56 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:27:40.910 01:20:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:40.910 01:20:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:40.910 01:20:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:40.910 01:20:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:40.910 01:20:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:41.167 01:20:56 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:27:41.167 01:20:56 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:41.167 01:20:56 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:41.167 01:20:56 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:41.167 01:20:56 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:41.167 01:20:56 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:41.167 01:20:56 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:41.167 01:20:56 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:41.167 01:20:56 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:41.167 01:20:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:41.167 [2024-07-16 01:20:57.157372] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.hlQaCljAwh': No such file or directory 00:27:41.167 [2024-07-16 01:20:57.157404] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:27:41.167 [2024-07-16 01:20:57.157444] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:27:41.167 [2024-07-16 01:20:57.157455] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:41.167 [2024-07-16 01:20:57.157466] bdev_nvme.c:6273:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:27:41.424 request: 00:27:41.424 { 00:27:41.424 "name": "nvme0", 00:27:41.424 "trtype": "tcp", 00:27:41.424 "traddr": "127.0.0.1", 00:27:41.424 "adrfam": "ipv4", 00:27:41.424 "trsvcid": "4420", 00:27:41.424 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:27:41.424 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:41.424 "prchk_reftag": false, 00:27:41.424 "prchk_guard": false, 00:27:41.424 "hdgst": false, 00:27:41.424 "ddgst": false, 00:27:41.424 "psk": "key0", 00:27:41.424 "method": "bdev_nvme_attach_controller", 00:27:41.424 "req_id": 1 00:27:41.424 } 00:27:41.424 Got JSON-RPC error response 00:27:41.424 response: 00:27:41.424 { 00:27:41.424 "code": -19, 00:27:41.424 "message": "No such device" 00:27:41.424 } 00:27:41.424 01:20:57 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:27:41.424 01:20:57 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:41.424 01:20:57 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:41.424 01:20:57 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:41.424 01:20:57 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:27:41.424 01:20:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:41.681 01:20:57 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:27:41.681 01:20:57 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:41.681 01:20:57 keyring_file -- keyring/common.sh@17 -- # name=key0 00:27:41.681 01:20:57 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:41.681 01:20:57 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:41.681 01:20:57 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:41.681 01:20:57 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.hdKyi9AnGg 00:27:41.681 01:20:57 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:41.681 01:20:57 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:41.681 01:20:57 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:41.681 01:20:57 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:41.681 01:20:57 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:41.681 01:20:57 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:41.681 01:20:57 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:41.681 01:20:57 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.hdKyi9AnGg 00:27:41.681 01:20:57 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.hdKyi9AnGg 00:27:41.681 01:20:57 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.hdKyi9AnGg 00:27:41.681 01:20:57 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.hdKyi9AnGg 00:27:41.681 01:20:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.hdKyi9AnGg 00:27:41.939 01:20:57 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:41.939 01:20:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:42.196 nvme0n1 00:27:42.196 01:20:58 keyring_file -- keyring/file.sh@99 
-- # get_refcnt key0 00:27:42.196 01:20:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:42.196 01:20:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:42.196 01:20:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:42.196 01:20:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:42.196 01:20:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:42.453 01:20:58 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:27:42.453 01:20:58 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:27:42.453 01:20:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:42.710 01:20:58 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:27:42.710 01:20:58 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:27:42.710 01:20:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:42.710 01:20:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:42.710 01:20:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:42.967 01:20:58 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:27:42.967 01:20:58 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:27:42.967 01:20:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:42.967 01:20:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:42.967 01:20:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:42.967 01:20:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:42.967 01:20:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:43.224 01:20:59 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:27:43.224 01:20:59 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:27:43.224 01:20:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:27:43.481 01:20:59 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:27:43.482 01:20:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:43.482 01:20:59 keyring_file -- keyring/file.sh@104 -- # jq length 00:27:43.738 01:20:59 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:27:43.738 01:20:59 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.hdKyi9AnGg 00:27:43.738 01:20:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.hdKyi9AnGg 00:27:43.995 01:20:59 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.VjNBfGbfRS 00:27:43.995 01:20:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.VjNBfGbfRS 00:27:44.253 01:20:59 keyring_file -- keyring/file.sh@109 -- # 
bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:44.253 01:20:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:44.510 nvme0n1 00:27:44.510 01:21:00 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:27:44.510 01:21:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:27:44.767 01:21:00 keyring_file -- keyring/file.sh@112 -- # config='{ 00:27:44.767 "subsystems": [ 00:27:44.767 { 00:27:44.767 "subsystem": "keyring", 00:27:44.767 "config": [ 00:27:44.767 { 00:27:44.767 "method": "keyring_file_add_key", 00:27:44.767 "params": { 00:27:44.767 "name": "key0", 00:27:44.767 "path": "/tmp/tmp.hdKyi9AnGg" 00:27:44.767 } 00:27:44.767 }, 00:27:44.767 { 00:27:44.767 "method": "keyring_file_add_key", 00:27:44.767 "params": { 00:27:44.767 "name": "key1", 00:27:44.767 "path": "/tmp/tmp.VjNBfGbfRS" 00:27:44.767 } 00:27:44.767 } 00:27:44.767 ] 00:27:44.767 }, 00:27:44.767 { 00:27:44.767 "subsystem": "iobuf", 00:27:44.767 "config": [ 00:27:44.767 { 00:27:44.767 "method": "iobuf_set_options", 00:27:44.767 "params": { 00:27:44.767 "small_pool_count": 8192, 00:27:44.767 "large_pool_count": 1024, 00:27:44.767 "small_bufsize": 8192, 00:27:44.767 "large_bufsize": 135168 00:27:44.767 } 00:27:44.767 } 00:27:44.767 ] 00:27:44.767 }, 00:27:44.767 { 00:27:44.767 "subsystem": "sock", 00:27:44.767 "config": [ 00:27:44.767 { 00:27:44.767 "method": "sock_set_default_impl", 00:27:44.767 "params": { 00:27:44.767 "impl_name": "posix" 00:27:44.767 } 00:27:44.767 }, 00:27:44.767 { 00:27:44.767 "method": "sock_impl_set_options", 00:27:44.767 "params": { 00:27:44.767 "impl_name": "ssl", 00:27:44.767 "recv_buf_size": 4096, 00:27:44.767 "send_buf_size": 4096, 00:27:44.767 "enable_recv_pipe": true, 00:27:44.767 "enable_quickack": false, 00:27:44.767 "enable_placement_id": 0, 00:27:44.767 "enable_zerocopy_send_server": true, 00:27:44.767 "enable_zerocopy_send_client": false, 00:27:44.767 "zerocopy_threshold": 0, 00:27:44.768 "tls_version": 0, 00:27:44.768 "enable_ktls": false 00:27:44.768 } 00:27:44.768 }, 00:27:44.768 { 00:27:44.768 "method": "sock_impl_set_options", 00:27:44.768 "params": { 00:27:44.768 "impl_name": "posix", 00:27:44.768 "recv_buf_size": 2097152, 00:27:44.768 "send_buf_size": 2097152, 00:27:44.768 "enable_recv_pipe": true, 00:27:44.768 "enable_quickack": false, 00:27:44.768 "enable_placement_id": 0, 00:27:44.768 "enable_zerocopy_send_server": true, 00:27:44.768 "enable_zerocopy_send_client": false, 00:27:44.768 "zerocopy_threshold": 0, 00:27:44.768 "tls_version": 0, 00:27:44.768 "enable_ktls": false 00:27:44.768 } 00:27:44.768 } 00:27:44.768 ] 00:27:44.768 }, 00:27:44.768 { 00:27:44.768 "subsystem": "vmd", 00:27:44.768 "config": [] 00:27:44.768 }, 00:27:44.768 { 00:27:44.768 "subsystem": "accel", 00:27:44.768 "config": [ 00:27:44.768 { 00:27:44.768 "method": "accel_set_options", 00:27:44.768 "params": { 00:27:44.768 "small_cache_size": 128, 00:27:44.768 "large_cache_size": 16, 00:27:44.768 "task_count": 2048, 00:27:44.768 "sequence_count": 2048, 00:27:44.768 "buf_count": 2048 00:27:44.768 } 00:27:44.768 } 00:27:44.768 ] 00:27:44.768 }, 00:27:44.768 { 00:27:44.768 
"subsystem": "bdev", 00:27:44.768 "config": [ 00:27:44.768 { 00:27:44.768 "method": "bdev_set_options", 00:27:44.768 "params": { 00:27:44.768 "bdev_io_pool_size": 65535, 00:27:44.768 "bdev_io_cache_size": 256, 00:27:44.768 "bdev_auto_examine": true, 00:27:44.768 "iobuf_small_cache_size": 128, 00:27:44.768 "iobuf_large_cache_size": 16 00:27:44.768 } 00:27:44.768 }, 00:27:44.768 { 00:27:44.768 "method": "bdev_raid_set_options", 00:27:44.768 "params": { 00:27:44.768 "process_window_size_kb": 1024 00:27:44.768 } 00:27:44.768 }, 00:27:44.768 { 00:27:44.768 "method": "bdev_iscsi_set_options", 00:27:44.768 "params": { 00:27:44.768 "timeout_sec": 30 00:27:44.768 } 00:27:44.768 }, 00:27:44.768 { 00:27:44.768 "method": "bdev_nvme_set_options", 00:27:44.768 "params": { 00:27:44.768 "action_on_timeout": "none", 00:27:44.768 "timeout_us": 0, 00:27:44.768 "timeout_admin_us": 0, 00:27:44.768 "keep_alive_timeout_ms": 10000, 00:27:44.768 "arbitration_burst": 0, 00:27:44.768 "low_priority_weight": 0, 00:27:44.768 "medium_priority_weight": 0, 00:27:44.768 "high_priority_weight": 0, 00:27:44.768 "nvme_adminq_poll_period_us": 10000, 00:27:44.768 "nvme_ioq_poll_period_us": 0, 00:27:44.768 "io_queue_requests": 512, 00:27:44.768 "delay_cmd_submit": true, 00:27:44.768 "transport_retry_count": 4, 00:27:44.768 "bdev_retry_count": 3, 00:27:44.768 "transport_ack_timeout": 0, 00:27:44.768 "ctrlr_loss_timeout_sec": 0, 00:27:44.768 "reconnect_delay_sec": 0, 00:27:44.768 "fast_io_fail_timeout_sec": 0, 00:27:44.768 "disable_auto_failback": false, 00:27:44.768 "generate_uuids": false, 00:27:44.768 "transport_tos": 0, 00:27:44.768 "nvme_error_stat": false, 00:27:44.768 "rdma_srq_size": 0, 00:27:44.768 "io_path_stat": false, 00:27:44.768 "allow_accel_sequence": false, 00:27:44.768 "rdma_max_cq_size": 0, 00:27:44.768 "rdma_cm_event_timeout_ms": 0, 00:27:44.768 "dhchap_digests": [ 00:27:44.768 "sha256", 00:27:44.768 "sha384", 00:27:44.768 "sha512" 00:27:44.768 ], 00:27:44.768 "dhchap_dhgroups": [ 00:27:44.768 "null", 00:27:44.768 "ffdhe2048", 00:27:44.768 "ffdhe3072", 00:27:44.768 "ffdhe4096", 00:27:44.768 "ffdhe6144", 00:27:44.768 "ffdhe8192" 00:27:44.768 ] 00:27:44.768 } 00:27:44.768 }, 00:27:44.768 { 00:27:44.768 "method": "bdev_nvme_attach_controller", 00:27:44.768 "params": { 00:27:44.768 "name": "nvme0", 00:27:44.768 "trtype": "TCP", 00:27:44.768 "adrfam": "IPv4", 00:27:44.768 "traddr": "127.0.0.1", 00:27:44.768 "trsvcid": "4420", 00:27:44.768 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:44.768 "prchk_reftag": false, 00:27:44.768 "prchk_guard": false, 00:27:44.768 "ctrlr_loss_timeout_sec": 0, 00:27:44.768 "reconnect_delay_sec": 0, 00:27:44.768 "fast_io_fail_timeout_sec": 0, 00:27:44.768 "psk": "key0", 00:27:44.768 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:44.768 "hdgst": false, 00:27:44.768 "ddgst": false 00:27:44.768 } 00:27:44.768 }, 00:27:44.768 { 00:27:44.768 "method": "bdev_nvme_set_hotplug", 00:27:44.768 "params": { 00:27:44.768 "period_us": 100000, 00:27:44.768 "enable": false 00:27:44.768 } 00:27:44.768 }, 00:27:44.768 { 00:27:44.768 "method": "bdev_wait_for_examine" 00:27:44.768 } 00:27:44.768 ] 00:27:44.768 }, 00:27:44.768 { 00:27:44.768 "subsystem": "nbd", 00:27:44.768 "config": [] 00:27:44.768 } 00:27:44.768 ] 00:27:44.768 }' 00:27:44.768 01:21:00 keyring_file -- keyring/file.sh@114 -- # killprocess 87806 00:27:44.768 01:21:00 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 87806 ']' 00:27:44.768 01:21:00 keyring_file -- common/autotest_common.sh@952 -- # kill -0 87806 00:27:44.768 01:21:00 
keyring_file -- common/autotest_common.sh@953 -- # uname 00:27:44.768 01:21:00 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:44.768 01:21:00 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87806 00:27:44.768 01:21:00 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:44.768 01:21:00 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:44.768 01:21:00 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87806' 00:27:44.768 killing process with pid 87806 00:27:44.768 01:21:00 keyring_file -- common/autotest_common.sh@967 -- # kill 87806 00:27:44.768 Received shutdown signal, test time was about 1.000000 seconds 00:27:44.768 00:27:44.768 Latency(us) 00:27:44.768 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:44.768 =================================================================================================================== 00:27:44.768 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:44.768 01:21:00 keyring_file -- common/autotest_common.sh@972 -- # wait 87806 00:27:45.026 01:21:00 keyring_file -- keyring/file.sh@117 -- # bperfpid=89317 00:27:45.026 01:21:00 keyring_file -- keyring/file.sh@119 -- # waitforlisten 89317 /var/tmp/bperf.sock 00:27:45.026 01:21:00 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 89317 ']' 00:27:45.026 01:21:00 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:45.026 01:21:00 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:27:45.026 01:21:00 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:45.026 01:21:00 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
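
Both key files registered earlier (/tmp/tmp.hdKyi9AnGg and /tmp/tmp.VjNBfGbfRS) were produced by prep_key, which pushes the raw hex string through the inline "python -" helper traced at nvmf/common.sh@705 to get the NVMe TLS PSK interchange form. A sketch of that encoding, under the assumption that the base64 payload is the configured key bytes followed by their little-endian CRC-32 (the 00 field marks a PSK with no digest):

# format_interchange_psk <key>: hedged reconstruction of the helper behind
# the 'python -' trace lines. Assumption: payload layout is the key bytes
# plus the little-endian CRC-32 of those bytes.
format_interchange_psk() {
    python3 - "$1" <<'EOF'
import base64, struct, sys, zlib
key = sys.argv[1].encode()
crc = struct.pack("<I", zlib.crc32(key))
print("NVMeTLSkey-1:00:%s:" % base64.b64encode(key + crc).decode())
EOF
}

# For the test key this should reproduce the string the keyring_linux run
# below feeds to keyctl:
# format_interchange_psk 00112233445566778899aabbccddeeff
# -> NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
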
00:27:45.026 01:21:00 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:27:45.026 "subsystems": [ 00:27:45.026 { 00:27:45.026 "subsystem": "keyring", 00:27:45.026 "config": [ 00:27:45.026 { 00:27:45.026 "method": "keyring_file_add_key", 00:27:45.026 "params": { 00:27:45.026 "name": "key0", 00:27:45.026 "path": "/tmp/tmp.hdKyi9AnGg" 00:27:45.026 } 00:27:45.026 }, 00:27:45.026 { 00:27:45.026 "method": "keyring_file_add_key", 00:27:45.026 "params": { 00:27:45.026 "name": "key1", 00:27:45.026 "path": "/tmp/tmp.VjNBfGbfRS" 00:27:45.026 } 00:27:45.026 } 00:27:45.026 ] 00:27:45.026 }, 00:27:45.026 { 00:27:45.026 "subsystem": "iobuf", 00:27:45.026 "config": [ 00:27:45.026 { 00:27:45.026 "method": "iobuf_set_options", 00:27:45.026 "params": { 00:27:45.026 "small_pool_count": 8192, 00:27:45.026 "large_pool_count": 1024, 00:27:45.026 "small_bufsize": 8192, 00:27:45.026 "large_bufsize": 135168 00:27:45.026 } 00:27:45.026 } 00:27:45.026 ] 00:27:45.026 }, 00:27:45.026 { 00:27:45.026 "subsystem": "sock", 00:27:45.026 "config": [ 00:27:45.026 { 00:27:45.026 "method": "sock_set_default_impl", 00:27:45.026 "params": { 00:27:45.026 "impl_name": "posix" 00:27:45.026 } 00:27:45.026 }, 00:27:45.026 { 00:27:45.026 "method": "sock_impl_set_options", 00:27:45.026 "params": { 00:27:45.026 "impl_name": "ssl", 00:27:45.026 "recv_buf_size": 4096, 00:27:45.026 "send_buf_size": 4096, 00:27:45.026 "enable_recv_pipe": true, 00:27:45.026 "enable_quickack": false, 00:27:45.026 "enable_placement_id": 0, 00:27:45.026 "enable_zerocopy_send_server": true, 00:27:45.026 "enable_zerocopy_send_client": false, 00:27:45.026 "zerocopy_threshold": 0, 00:27:45.026 "tls_version": 0, 00:27:45.026 "enable_ktls": false 00:27:45.026 } 00:27:45.026 }, 00:27:45.026 { 00:27:45.026 "method": "sock_impl_set_options", 00:27:45.026 "params": { 00:27:45.026 "impl_name": "posix", 00:27:45.026 "recv_buf_size": 2097152, 00:27:45.026 "send_buf_size": 2097152, 00:27:45.026 "enable_recv_pipe": true, 00:27:45.026 "enable_quickack": false, 00:27:45.026 "enable_placement_id": 0, 00:27:45.026 "enable_zerocopy_send_server": true, 00:27:45.026 "enable_zerocopy_send_client": false, 00:27:45.026 "zerocopy_threshold": 0, 00:27:45.026 "tls_version": 0, 00:27:45.026 "enable_ktls": false 00:27:45.026 } 00:27:45.026 } 00:27:45.026 ] 00:27:45.026 }, 00:27:45.026 { 00:27:45.026 "subsystem": "vmd", 00:27:45.026 "config": [] 00:27:45.026 }, 00:27:45.026 { 00:27:45.026 "subsystem": "accel", 00:27:45.026 "config": [ 00:27:45.026 { 00:27:45.026 "method": "accel_set_options", 00:27:45.026 "params": { 00:27:45.026 "small_cache_size": 128, 00:27:45.026 "large_cache_size": 16, 00:27:45.026 "task_count": 2048, 00:27:45.026 "sequence_count": 2048, 00:27:45.026 "buf_count": 2048 00:27:45.026 } 00:27:45.026 } 00:27:45.026 ] 00:27:45.026 }, 00:27:45.026 { 00:27:45.026 "subsystem": "bdev", 00:27:45.026 "config": [ 00:27:45.026 { 00:27:45.026 "method": "bdev_set_options", 00:27:45.026 "params": { 00:27:45.026 "bdev_io_pool_size": 65535, 00:27:45.026 "bdev_io_cache_size": 256, 00:27:45.026 "bdev_auto_examine": true, 00:27:45.026 "iobuf_small_cache_size": 128, 00:27:45.026 "iobuf_large_cache_size": 16 00:27:45.026 } 00:27:45.026 }, 00:27:45.026 { 00:27:45.026 "method": "bdev_raid_set_options", 00:27:45.026 "params": { 00:27:45.026 "process_window_size_kb": 1024 00:27:45.026 } 00:27:45.026 }, 00:27:45.026 { 00:27:45.026 "method": "bdev_iscsi_set_options", 00:27:45.026 "params": { 00:27:45.026 "timeout_sec": 30 00:27:45.026 } 00:27:45.026 }, 00:27:45.026 { 00:27:45.026 "method": 
"bdev_nvme_set_options", 00:27:45.026 "params": { 00:27:45.026 "action_on_timeout": "none", 00:27:45.026 "timeout_us": 0, 00:27:45.026 "timeout_admin_us": 0, 00:27:45.026 "keep_alive_timeout_ms": 10000, 00:27:45.026 "arbitration_burst": 0, 00:27:45.026 "low_priority_weight": 0, 00:27:45.026 "medium_priority_weight": 0, 00:27:45.026 "high_priority_weight": 0, 00:27:45.026 "nvme_adminq_poll_period_us": 10000, 00:27:45.026 "nvme_ioq_poll_period_us": 0, 00:27:45.026 "io_queue_requests": 512, 00:27:45.026 "delay_cmd_submit": true, 00:27:45.026 "transport_retry_count": 4, 00:27:45.026 "bdev_retry_count": 3, 00:27:45.026 "transport_ack_timeout": 0, 00:27:45.026 "ctrlr_loss_timeout_sec": 0, 00:27:45.026 "reconnect_delay_sec": 0, 00:27:45.026 "fast_io_fail_timeout_sec": 0, 00:27:45.026 "disable_auto_failback": false, 00:27:45.026 "generate_uuids": false, 00:27:45.026 "transport_tos": 0, 00:27:45.026 "nvme_error_stat": false, 00:27:45.026 "rdma_srq_size": 0, 00:27:45.026 "io_path_stat": false, 00:27:45.026 "allow_accel_sequence": false, 00:27:45.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:45.026 "rdma_max_cq_size": 0, 00:27:45.026 "rdma_cm_event_timeout_ms": 0, 00:27:45.026 "dhchap_digests": [ 00:27:45.026 "sha256", 00:27:45.026 "sha384", 00:27:45.026 "sha512" 00:27:45.026 ], 00:27:45.026 "dhchap_dhgroups": [ 00:27:45.026 "null", 00:27:45.026 "ffdhe2048", 00:27:45.026 "ffdhe3072", 00:27:45.026 "ffdhe4096", 00:27:45.026 "ffdhe6144", 00:27:45.026 "ffdhe8192" 00:27:45.026 ] 00:27:45.026 } 00:27:45.026 }, 00:27:45.026 { 00:27:45.026 "method": "bdev_nvme_attach_controller", 00:27:45.026 "params": { 00:27:45.026 "name": "nvme0", 00:27:45.026 "trtype": "TCP", 00:27:45.026 "adrfam": "IPv4", 00:27:45.026 "traddr": "127.0.0.1", 00:27:45.026 "trsvcid": "4420", 00:27:45.026 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:45.026 "prchk_reftag": false, 00:27:45.026 "prchk_guard": false, 00:27:45.026 "ctrlr_loss_timeout_sec": 0, 00:27:45.026 "reconnect_delay_sec": 0, 00:27:45.026 "fast_io_fail_timeout_sec": 0, 00:27:45.026 "psk": "key0", 00:27:45.026 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:45.026 "hdgst": false, 00:27:45.026 "ddgst": false 00:27:45.026 } 00:27:45.026 }, 00:27:45.026 { 00:27:45.026 "method": "bdev_nvme_set_hotplug", 00:27:45.026 "params": { 00:27:45.026 "period_us": 100000, 00:27:45.027 "enable": false 00:27:45.027 } 00:27:45.027 }, 00:27:45.027 { 00:27:45.027 "method": "bdev_wait_for_examine" 00:27:45.027 } 00:27:45.027 ] 00:27:45.027 }, 00:27:45.027 { 00:27:45.027 "subsystem": "nbd", 00:27:45.027 "config": [] 00:27:45.027 } 00:27:45.027 ] 00:27:45.027 }' 00:27:45.027 01:21:00 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:45.027 01:21:00 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:45.027 [2024-07-16 01:21:00.969034] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 
00:27:45.027 [2024-07-16 01:21:00.969120] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89317 ] 00:27:45.027 EAL: No free 2048 kB hugepages reported on node 1 00:27:45.285 [2024-07-16 01:21:01.028741] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:45.285 [2024-07-16 01:21:01.139446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:45.543 [2024-07-16 01:21:01.321476] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:46.106 01:21:01 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:46.107 01:21:01 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:27:46.107 01:21:01 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:27:46.107 01:21:01 keyring_file -- keyring/file.sh@120 -- # jq length 00:27:46.107 01:21:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:46.363 01:21:02 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:27:46.363 01:21:02 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:27:46.363 01:21:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:46.363 01:21:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:46.363 01:21:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:46.363 01:21:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:46.363 01:21:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:46.621 01:21:02 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:27:46.621 01:21:02 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:27:46.621 01:21:02 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:46.621 01:21:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:46.621 01:21:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:46.621 01:21:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:46.621 01:21:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:46.877 01:21:02 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:27:46.877 01:21:02 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:27:46.877 01:21:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:27:46.877 01:21:02 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:27:47.134 01:21:02 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:27:47.134 01:21:02 keyring_file -- keyring/file.sh@1 -- # cleanup 00:27:47.134 01:21:02 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.hdKyi9AnGg /tmp/tmp.VjNBfGbfRS 00:27:47.134 01:21:02 keyring_file -- keyring/file.sh@20 -- # killprocess 89317 00:27:47.134 01:21:02 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 89317 ']' 00:27:47.134 01:21:02 keyring_file -- common/autotest_common.sh@952 -- # kill -0 89317 00:27:47.134 01:21:02 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:27:47.134 01:21:02 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:47.134 01:21:02 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89317 00:27:47.134 01:21:02 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:47.134 01:21:02 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:47.134 01:21:02 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89317' 00:27:47.134 killing process with pid 89317 00:27:47.134 01:21:02 keyring_file -- common/autotest_common.sh@967 -- # kill 89317 00:27:47.134 Received shutdown signal, test time was about 1.000000 seconds 00:27:47.134 00:27:47.134 Latency(us) 00:27:47.134 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:47.134 =================================================================================================================== 00:27:47.134 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:47.134 01:21:02 keyring_file -- common/autotest_common.sh@972 -- # wait 89317 00:27:47.389 01:21:03 keyring_file -- keyring/file.sh@21 -- # killprocess 87790 00:27:47.389 01:21:03 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 87790 ']' 00:27:47.389 01:21:03 keyring_file -- common/autotest_common.sh@952 -- # kill -0 87790 00:27:47.389 01:21:03 keyring_file -- common/autotest_common.sh@953 -- # uname 00:27:47.389 01:21:03 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:47.389 01:21:03 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87790 00:27:47.389 01:21:03 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:47.389 01:21:03 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:47.389 01:21:03 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87790' 00:27:47.389 killing process with pid 87790 00:27:47.389 01:21:03 keyring_file -- common/autotest_common.sh@967 -- # kill 87790 00:27:47.389 [2024-07-16 01:21:03.208104] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:27:47.389 01:21:03 keyring_file -- common/autotest_common.sh@972 -- # wait 87790 00:27:47.646 00:27:47.646 real 0m14.114s 00:27:47.646 user 0m35.077s 00:27:47.646 sys 0m3.352s 00:27:47.646 01:21:03 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:47.646 01:21:03 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:47.646 ************************************ 00:27:47.646 END TEST keyring_file 00:27:47.646 ************************************ 00:27:47.646 01:21:03 -- common/autotest_common.sh@1142 -- # return 0 00:27:47.646 01:21:03 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:27:47.646 01:21:03 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:27:47.646 01:21:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:47.646 01:21:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:47.646 01:21:03 -- common/autotest_common.sh@10 -- # set +x 00:27:47.904 ************************************ 00:27:47.904 START TEST keyring_linux 00:27:47.904 ************************************ 00:27:47.904 01:21:03 keyring_linux -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:27:47.904 * Looking for test storage... 00:27:47.904 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:27:47.904 01:21:03 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:27:47.904 01:21:03 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:47.904 01:21:03 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:27:47.905 01:21:03 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:47.905 01:21:03 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:47.905 01:21:03 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:47.905 01:21:03 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:47.905 01:21:03 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:47.905 01:21:03 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:47.905 01:21:03 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:47.905 01:21:03 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:47.905 01:21:03 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:47.905 01:21:03 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:47.905 01:21:03 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:47.905 01:21:03 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:47.905 01:21:03 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:47.905 01:21:03 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:47.905 01:21:03 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:47.905 01:21:03 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:47.905 01:21:03 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:47.905 01:21:03 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:47.905 01:21:03 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:47.905 01:21:03 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:47.905 01:21:03 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:47.905 01:21:03 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:47.905 01:21:03 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:47.905 01:21:03 keyring_linux -- paths/export.sh@5 -- # export PATH 00:27:47.905 01:21:03 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:47.905 01:21:03 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:27:47.905 01:21:03 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:47.905 01:21:03 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:47.905 01:21:03 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:47.905 01:21:03 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:47.905 01:21:03 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:47.905 01:21:03 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:47.905 01:21:03 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:47.905 01:21:03 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:47.905 01:21:03 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:27:47.905 01:21:03 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:27:47.905 01:21:03 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:27:47.905 01:21:03 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:27:47.905 01:21:03 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:27:47.905 01:21:03 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:27:47.905 01:21:03 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:27:47.905 01:21:03 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:27:47.905 01:21:03 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:27:47.905 01:21:03 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:47.905 01:21:03 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:27:47.905 01:21:03 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:27:47.905 01:21:03 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:47.905 01:21:03 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:47.905 01:21:03 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:27:47.905 01:21:03 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:47.905 01:21:03 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:47.905 01:21:03 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:27:47.905 01:21:03 keyring_linux -- nvmf/common.sh@705 -- # python - 00:27:47.905 01:21:03 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:27:47.905 01:21:03 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:27:47.905 /tmp/:spdk-test:key0 00:27:47.905 01:21:03 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:27:47.905 01:21:03 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:27:47.905 01:21:03 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:27:47.905 01:21:03 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:27:47.905 01:21:03 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:27:47.905 01:21:03 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:27:47.905 01:21:03 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:27:47.905 01:21:03 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:27:47.905 01:21:03 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:27:47.905 01:21:03 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:47.905 01:21:03 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:27:47.905 01:21:03 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:27:47.905 01:21:03 keyring_linux -- nvmf/common.sh@705 -- # python - 00:27:47.905 01:21:03 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:27:47.905 01:21:03 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:27:47.905 /tmp/:spdk-test:key1 00:27:47.905 01:21:03 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=89738 00:27:47.905 01:21:03 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:27:47.905 01:21:03 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 89738 00:27:47.905 01:21:03 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 89738 ']' 00:27:47.905 01:21:03 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:47.905 01:21:03 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:47.905 01:21:03 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:47.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:47.905 01:21:03 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:47.905 01:21:03 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:47.905 [2024-07-16 01:21:03.860902] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 
00:27:47.905 [2024-07-16 01:21:03.861003] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89738 ] 00:27:47.905 EAL: No free 2048 kB hugepages reported on node 1 00:27:48.163 [2024-07-16 01:21:03.919054] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:48.163 [2024-07-16 01:21:04.023818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:48.420 01:21:04 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:48.420 01:21:04 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:27:48.420 01:21:04 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:27:48.420 01:21:04 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.420 01:21:04 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:48.420 [2024-07-16 01:21:04.272451] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:48.420 null0 00:27:48.420 [2024-07-16 01:21:04.304507] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:48.420 [2024-07-16 01:21:04.305016] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:48.420 01:21:04 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.420 01:21:04 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:27:48.420 841972869 00:27:48.420 01:21:04 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:27:48.420 601732092 00:27:48.420 01:21:04 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=89867 00:27:48.420 01:21:04 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:27:48.420 01:21:04 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 89867 /var/tmp/bperf.sock 00:27:48.420 01:21:04 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 89867 ']' 00:27:48.420 01:21:04 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:48.420 01:21:04 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:48.420 01:21:04 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:48.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:48.420 01:21:04 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:48.420 01:21:04 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:48.420 [2024-07-16 01:21:04.368368] Starting SPDK v24.09-pre git sha1 fd0bbcfdd / DPDK 24.03.0 initialization... 
00:27:48.420 [2024-07-16 01:21:04.368445] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89867 ] 00:27:48.420 EAL: No free 2048 kB hugepages reported on node 1 00:27:48.713 [2024-07-16 01:21:04.426874] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:48.713 [2024-07-16 01:21:04.533758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:48.713 01:21:04 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:48.713 01:21:04 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:27:48.713 01:21:04 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:27:48.713 01:21:04 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:27:48.969 01:21:04 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:27:48.970 01:21:04 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:49.226 01:21:05 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:27:49.226 01:21:05 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:27:49.483 [2024-07-16 01:21:05.401909] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:49.483 nvme0n1 00:27:49.739 01:21:05 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:27:49.739 01:21:05 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:27:49.739 01:21:05 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:27:49.739 01:21:05 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:27:49.739 01:21:05 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:27:49.739 01:21:05 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:49.739 01:21:05 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:27:49.739 01:21:05 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:27:49.739 01:21:05 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:27:49.739 01:21:05 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:27:49.739 01:21:05 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:49.739 01:21:05 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:27:49.739 01:21:05 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:49.996 01:21:05 keyring_linux -- keyring/linux.sh@25 -- # sn=841972869 00:27:49.996 01:21:05 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:27:49.996 01:21:05 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:27:49.996 01:21:05 keyring_linux -- keyring/linux.sh@26 -- # [[ 841972869 == \8\4\1\9\7\2\8\6\9 ]] 00:27:49.996 01:21:05 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 841972869 00:27:49.996 01:21:05 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:27:49.996 01:21:05 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:50.253 Running I/O for 1 seconds... 00:27:51.185 00:27:51.185 Latency(us) 00:27:51.185 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:51.185 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:51.185 nvme0n1 : 1.01 8321.43 32.51 0.00 0.00 15273.42 7864.32 23398.78 00:27:51.185 =================================================================================================================== 00:27:51.185 Total : 8321.43 32.51 0.00 0.00 15273.42 7864.32 23398.78 00:27:51.185 0 00:27:51.185 01:21:07 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:27:51.185 01:21:07 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:27:51.442 01:21:07 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:27:51.442 01:21:07 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:27:51.442 01:21:07 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:27:51.442 01:21:07 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:27:51.442 01:21:07 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:51.442 01:21:07 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:27:51.700 01:21:07 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:27:51.700 01:21:07 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:27:51.700 01:21:07 keyring_linux -- keyring/linux.sh@23 -- # return 00:27:51.700 01:21:07 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:51.700 01:21:07 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:27:51.700 01:21:07 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:51.700 01:21:07 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:51.700 01:21:07 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:51.700 01:21:07 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:51.700 01:21:07 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:51.700 01:21:07 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:51.700 01:21:07 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:51.958 [2024-07-16 01:21:07.868697] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:27:51.958 [2024-07-16 01:21:07.869094] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18352b0 (107): Transport endpoint is not connected 00:27:51.958 [2024-07-16 01:21:07.870086] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18352b0 (9): Bad file descriptor 00:27:51.958 [2024-07-16 01:21:07.871085] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:51.958 [2024-07-16 01:21:07.871106] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:27:51.958 [2024-07-16 01:21:07.871120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:51.958 request: 00:27:51.958 { 00:27:51.958 "name": "nvme0", 00:27:51.958 "trtype": "tcp", 00:27:51.958 "traddr": "127.0.0.1", 00:27:51.958 "adrfam": "ipv4", 00:27:51.958 "trsvcid": "4420", 00:27:51.958 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:51.958 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:51.958 "prchk_reftag": false, 00:27:51.958 "prchk_guard": false, 00:27:51.958 "hdgst": false, 00:27:51.958 "ddgst": false, 00:27:51.958 "psk": ":spdk-test:key1", 00:27:51.959 "method": "bdev_nvme_attach_controller", 00:27:51.959 "req_id": 1 00:27:51.959 } 00:27:51.959 Got JSON-RPC error response 00:27:51.959 response: 00:27:51.959 { 00:27:51.959 "code": -5, 00:27:51.959 "message": "Input/output error" 00:27:51.959 } 00:27:51.959 01:21:07 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:27:51.959 01:21:07 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:51.959 01:21:07 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:51.959 01:21:07 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:51.959 01:21:07 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:27:51.959 01:21:07 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:27:51.959 01:21:07 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:27:51.959 01:21:07 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:27:51.959 01:21:07 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:27:51.959 01:21:07 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:27:51.959 01:21:07 keyring_linux -- keyring/linux.sh@33 -- # sn=841972869 00:27:51.959 01:21:07 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 841972869 00:27:51.959 1 links removed 00:27:51.959 01:21:07 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:27:51.959 01:21:07 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:27:51.959 01:21:07 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:27:51.959 01:21:07 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:27:51.959 01:21:07 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:27:51.959 01:21:07 keyring_linux -- keyring/linux.sh@33 -- # sn=601732092 00:27:51.959 
01:21:07 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 601732092 00:27:51.959 1 links removed 00:27:51.959 01:21:07 keyring_linux -- keyring/linux.sh@41 -- # killprocess 89867 00:27:51.959 01:21:07 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 89867 ']' 00:27:51.959 01:21:07 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 89867 00:27:51.959 01:21:07 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:27:51.959 01:21:07 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:51.959 01:21:07 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89867 00:27:51.959 01:21:07 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:51.959 01:21:07 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:51.959 01:21:07 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89867' 00:27:51.959 killing process with pid 89867 00:27:51.959 01:21:07 keyring_linux -- common/autotest_common.sh@967 -- # kill 89867 00:27:51.959 Received shutdown signal, test time was about 1.000000 seconds 00:27:51.959 00:27:51.959 Latency(us) 00:27:51.959 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:51.959 =================================================================================================================== 00:27:51.959 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:51.959 01:21:07 keyring_linux -- common/autotest_common.sh@972 -- # wait 89867 00:27:52.217 01:21:08 keyring_linux -- keyring/linux.sh@42 -- # killprocess 89738 00:27:52.217 01:21:08 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 89738 ']' 00:27:52.217 01:21:08 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 89738 00:27:52.217 01:21:08 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:27:52.217 01:21:08 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:52.217 01:21:08 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89738 00:27:52.217 01:21:08 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:52.217 01:21:08 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:52.217 01:21:08 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89738' 00:27:52.217 killing process with pid 89738 00:27:52.217 01:21:08 keyring_linux -- common/autotest_common.sh@967 -- # kill 89738 00:27:52.217 01:21:08 keyring_linux -- common/autotest_common.sh@972 -- # wait 89738 00:27:52.782 00:27:52.782 real 0m4.971s 00:27:52.782 user 0m9.426s 00:27:52.782 sys 0m1.652s 00:27:52.782 01:21:08 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:52.782 01:21:08 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:52.782 ************************************ 00:27:52.782 END TEST keyring_linux 00:27:52.782 ************************************ 00:27:52.782 01:21:08 -- common/autotest_common.sh@1142 -- # return 0 00:27:52.782 01:21:08 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:27:52.782 01:21:08 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:27:52.782 01:21:08 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:27:52.782 01:21:08 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:27:52.782 01:21:08 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:27:52.782 01:21:08 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:27:52.782 01:21:08 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 
00:27:52.782 01:21:08 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:27:52.782 01:21:08 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:27:52.782 01:21:08 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:27:52.782 01:21:08 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:27:52.782 01:21:08 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:27:52.782 01:21:08 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:27:52.782 01:21:08 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:27:52.782 01:21:08 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:27:52.782 01:21:08 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:27:52.782 01:21:08 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:27:52.782 01:21:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:52.782 01:21:08 -- common/autotest_common.sh@10 -- # set +x 00:27:52.782 01:21:08 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:27:52.782 01:21:08 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:27:52.782 01:21:08 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:27:52.782 01:21:08 -- common/autotest_common.sh@10 -- # set +x 00:27:54.731 INFO: APP EXITING 00:27:54.731 INFO: killing all VMs 00:27:54.731 INFO: killing vhost app 00:27:54.731 INFO: EXIT DONE 00:27:55.665 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:27:55.665 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:27:55.665 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:27:55.665 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:27:55.665 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:27:55.665 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:27:55.665 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:27:55.665 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:27:55.665 0000:0b:00.0 (8086 0a54): Already using the nvme driver 00:27:55.665 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:27:55.665 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:27:55.665 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:27:55.665 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:27:55.665 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:27:55.665 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:27:55.665 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:27:55.923 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:27:57.296 Cleaning 00:27:57.296 Removing: /var/run/dpdk/spdk0/config 00:27:57.296 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:27:57.296 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:27:57.296 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:27:57.296 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:27:57.296 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:27:57.296 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:27:57.296 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:27:57.296 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:27:57.296 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:27:57.296 Removing: /var/run/dpdk/spdk0/hugepage_info 00:27:57.296 Removing: /var/run/dpdk/spdk1/config 00:27:57.296 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:27:57.296 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:27:57.296 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:27:57.296 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:27:57.296 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:27:57.296 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:27:57.296 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:27:57.296 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:27:57.296 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:27:57.296 Removing: /var/run/dpdk/spdk1/hugepage_info 00:27:57.296 Removing: /var/run/dpdk/spdk1/mp_socket 00:27:57.296 Removing: /var/run/dpdk/spdk2/config 00:27:57.296 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:27:57.296 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:27:57.296 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:27:57.296 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:27:57.296 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:27:57.296 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:27:57.296 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:27:57.296 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:27:57.296 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:27:57.297 Removing: /var/run/dpdk/spdk2/hugepage_info 00:27:57.297 Removing: /var/run/dpdk/spdk3/config 00:27:57.297 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:27:57.297 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:27:57.297 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:27:57.297 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:27:57.297 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:27:57.297 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:27:57.297 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:27:57.297 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:27:57.297 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:27:57.297 Removing: /var/run/dpdk/spdk3/hugepage_info 00:27:57.297 Removing: /var/run/dpdk/spdk4/config 00:27:57.297 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:27:57.297 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:27:57.297 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:27:57.297 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:27:57.297 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:27:57.297 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:27:57.297 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:27:57.297 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:27:57.297 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:27:57.297 Removing: /var/run/dpdk/spdk4/hugepage_info 00:27:57.297 Removing: /dev/shm/bdev_svc_trace.1 00:27:57.297 Removing: /dev/shm/nvmf_trace.0 00:27:57.297 Removing: /dev/shm/spdk_tgt_trace.pid4026002 00:27:57.297 Removing: /var/run/dpdk/spdk0 00:27:57.297 Removing: /var/run/dpdk/spdk1 00:27:57.297 Removing: /var/run/dpdk/spdk2 00:27:57.297 Removing: /var/run/dpdk/spdk3 00:27:57.297 Removing: /var/run/dpdk/spdk4 00:27:57.297 Removing: /var/run/dpdk/spdk_pid12494 00:27:57.297 Removing: /var/run/dpdk/spdk_pid12630 00:27:57.297 Removing: /var/run/dpdk/spdk_pid12765 00:27:57.297 Removing: /var/run/dpdk/spdk_pid13152 00:27:57.297 Removing: /var/run/dpdk/spdk_pid13159 00:27:57.297 Removing: /var/run/dpdk/spdk_pid15918 00:27:57.297 Removing: /var/run/dpdk/spdk_pid16256 00:27:57.297 Removing: /var/run/dpdk/spdk_pid18906 00:27:57.297 Removing: /var/run/dpdk/spdk_pid20786 00:27:57.297 Removing: /var/run/dpdk/spdk_pid24323 00:27:57.297 Removing: /var/run/dpdk/spdk_pid28405 00:27:57.297 Removing: /var/run/dpdk/spdk_pid3140 00:27:57.297 Removing: 
/var/run/dpdk/spdk_pid34628 00:27:57.297 Removing: /var/run/dpdk/spdk_pid363 00:27:57.297 Removing: /var/run/dpdk/spdk_pid39088 00:27:57.297 Removing: /var/run/dpdk/spdk_pid39094 00:27:57.297 Removing: /var/run/dpdk/spdk_pid4024458 00:27:57.297 Removing: /var/run/dpdk/spdk_pid4025182 00:27:57.297 Removing: /var/run/dpdk/spdk_pid4026002 00:27:57.297 Removing: /var/run/dpdk/spdk_pid4026438 00:27:57.297 Removing: /var/run/dpdk/spdk_pid4027128 00:27:57.297 Removing: /var/run/dpdk/spdk_pid4027266 00:27:57.297 Removing: /var/run/dpdk/spdk_pid4027984 00:27:57.297 Removing: /var/run/dpdk/spdk_pid4027995 00:27:57.297 Removing: /var/run/dpdk/spdk_pid4028239 00:27:57.297 Removing: /var/run/dpdk/spdk_pid4029432 00:27:57.297 Removing: /var/run/dpdk/spdk_pid4030463 00:27:57.297 Removing: /var/run/dpdk/spdk_pid4030662 00:27:57.297 Removing: /var/run/dpdk/spdk_pid4030941 00:27:57.297 Removing: /var/run/dpdk/spdk_pid4031171 00:27:57.297 Removing: /var/run/dpdk/spdk_pid4031359 00:27:57.297 Removing: /var/run/dpdk/spdk_pid4031514 00:27:57.297 Removing: /var/run/dpdk/spdk_pid4031668 00:27:57.297 Removing: /var/run/dpdk/spdk_pid4031856 00:27:57.297 Removing: /var/run/dpdk/spdk_pid4032175 00:27:57.297 Removing: /var/run/dpdk/spdk_pid4034527 00:27:57.297 Removing: /var/run/dpdk/spdk_pid4034689 00:27:57.297 Removing: /var/run/dpdk/spdk_pid4034888 00:27:57.297 Removing: /var/run/dpdk/spdk_pid4034980 00:27:57.297 Removing: /var/run/dpdk/spdk_pid4035285 00:27:57.297 Removing: /var/run/dpdk/spdk_pid4035410 00:27:57.297 Removing: /var/run/dpdk/spdk_pid4035725 00:27:57.297 Removing: /var/run/dpdk/spdk_pid4035843 00:27:57.297 Removing: /var/run/dpdk/spdk_pid4036022 00:27:57.297 Removing: /var/run/dpdk/spdk_pid4036043 00:27:57.297 Removing: /var/run/dpdk/spdk_pid4036305 00:27:57.297 Removing: /var/run/dpdk/spdk_pid4036325 00:27:57.297 Removing: /var/run/dpdk/spdk_pid4036689 00:27:57.297 Removing: /var/run/dpdk/spdk_pid4036866 00:27:57.297 Removing: /var/run/dpdk/spdk_pid4037157 00:27:57.297 Removing: /var/run/dpdk/spdk_pid4037327 00:27:57.297 Removing: /var/run/dpdk/spdk_pid4037364 00:27:57.297 Removing: /var/run/dpdk/spdk_pid4037538 00:27:57.297 Removing: /var/run/dpdk/spdk_pid4037699 00:27:57.297 Removing: /var/run/dpdk/spdk_pid4037858 00:27:57.297 Removing: /var/run/dpdk/spdk_pid4038070 00:27:57.297 Removing: /var/run/dpdk/spdk_pid4038284 00:27:57.297 Removing: /var/run/dpdk/spdk_pid4038451 00:27:57.297 Removing: /var/run/dpdk/spdk_pid4038604 00:27:57.297 Removing: /var/run/dpdk/spdk_pid4038876 00:27:57.297 Removing: /var/run/dpdk/spdk_pid4039034 00:27:57.297 Removing: /var/run/dpdk/spdk_pid4039190 00:27:57.297 Removing: /var/run/dpdk/spdk_pid4039464 00:27:57.555 Removing: /var/run/dpdk/spdk_pid4039617 00:27:57.555 Removing: /var/run/dpdk/spdk_pid4039786 00:27:57.555 Removing: /var/run/dpdk/spdk_pid4039992 00:27:57.555 Removing: /var/run/dpdk/spdk_pid4040212 00:27:57.555 Removing: /var/run/dpdk/spdk_pid4040371 00:27:57.555 Removing: /var/run/dpdk/spdk_pid4040532 00:27:57.555 Removing: /var/run/dpdk/spdk_pid4040801 00:27:57.555 Removing: /var/run/dpdk/spdk_pid4040966 00:27:57.555 Removing: /var/run/dpdk/spdk_pid4041126 00:27:57.555 Removing: /var/run/dpdk/spdk_pid4041401 00:27:57.555 Removing: /var/run/dpdk/spdk_pid4041471 00:27:57.555 Removing: /var/run/dpdk/spdk_pid4041675 00:27:57.555 Removing: /var/run/dpdk/spdk_pid4043848 00:27:57.555 Removing: /var/run/dpdk/spdk_pid4070120 00:27:57.555 Removing: /var/run/dpdk/spdk_pid4072731 00:27:57.555 Removing: /var/run/dpdk/spdk_pid4079702 00:27:57.555 Removing: 
/var/run/dpdk/spdk_pid4082966 00:27:57.555 Removing: /var/run/dpdk/spdk_pid4085234 00:27:57.555 Removing: /var/run/dpdk/spdk_pid4085760 00:27:57.555 Removing: /var/run/dpdk/spdk_pid4089602 00:27:57.555 Removing: /var/run/dpdk/spdk_pid4094057 00:27:57.555 Removing: /var/run/dpdk/spdk_pid4094059 00:27:57.555 Removing: /var/run/dpdk/spdk_pid4094716 00:27:57.555 Removing: /var/run/dpdk/spdk_pid4095253 00:27:57.555 Removing: /var/run/dpdk/spdk_pid4095918 00:27:57.555 Removing: /var/run/dpdk/spdk_pid4096315 00:27:57.555 Removing: /var/run/dpdk/spdk_pid4096321 00:27:57.555 Removing: /var/run/dpdk/spdk_pid4096509 00:27:57.555 Removing: /var/run/dpdk/spdk_pid4096596 00:27:57.555 Removing: /var/run/dpdk/spdk_pid4096606 00:27:57.555 Removing: /var/run/dpdk/spdk_pid4097260 00:27:57.555 Removing: /var/run/dpdk/spdk_pid4097915 00:27:57.555 Removing: /var/run/dpdk/spdk_pid4098524 00:27:57.555 Removing: /var/run/dpdk/spdk_pid4098873 00:27:57.555 Removing: /var/run/dpdk/spdk_pid4098977 00:27:57.555 Removing: /var/run/dpdk/spdk_pid4099124 00:27:57.555 Removing: /var/run/dpdk/spdk_pid4100013 00:27:57.555 Removing: /var/run/dpdk/spdk_pid4100729 00:27:57.555 Removing: /var/run/dpdk/spdk_pid4106226 00:27:57.555 Removing: /var/run/dpdk/spdk_pid4106496 00:27:57.555 Removing: /var/run/dpdk/spdk_pid4109023 00:27:57.555 Removing: /var/run/dpdk/spdk_pid4112724 00:27:57.555 Removing: /var/run/dpdk/spdk_pid4114891 00:27:57.555 Removing: /var/run/dpdk/spdk_pid4121273 00:27:57.555 Removing: /var/run/dpdk/spdk_pid4126975 00:27:57.555 Removing: /var/run/dpdk/spdk_pid4128287 00:27:57.555 Removing: /var/run/dpdk/spdk_pid4128955 00:27:57.555 Removing: /var/run/dpdk/spdk_pid4139045 00:27:57.555 Removing: /var/run/dpdk/spdk_pid4141243 00:27:57.555 Removing: /var/run/dpdk/spdk_pid4165777 00:27:57.555 Removing: /var/run/dpdk/spdk_pid4168561 00:27:57.555 Removing: /var/run/dpdk/spdk_pid4169739 00:27:57.555 Removing: /var/run/dpdk/spdk_pid4171057 00:27:57.555 Removing: /var/run/dpdk/spdk_pid4171193 00:27:57.555 Removing: /var/run/dpdk/spdk_pid4171220 00:27:57.555 Removing: /var/run/dpdk/spdk_pid4171350 00:27:57.555 Removing: /var/run/dpdk/spdk_pid4171787 00:27:57.555 Removing: /var/run/dpdk/spdk_pid4173106 00:27:57.555 Removing: /var/run/dpdk/spdk_pid4173708 00:27:57.555 Removing: /var/run/dpdk/spdk_pid4174135 00:27:57.555 Removing: /var/run/dpdk/spdk_pid4175754 00:27:57.555 Removing: /var/run/dpdk/spdk_pid4176174 00:27:57.555 Removing: /var/run/dpdk/spdk_pid4176618 00:27:57.555 Removing: /var/run/dpdk/spdk_pid4179248 00:27:57.555 Removing: /var/run/dpdk/spdk_pid4185773 00:27:57.555 Removing: /var/run/dpdk/spdk_pid4188424 00:27:57.555 Removing: /var/run/dpdk/spdk_pid4192201 00:27:57.555 Removing: /var/run/dpdk/spdk_pid4193247 00:27:57.555 Removing: /var/run/dpdk/spdk_pid51031 00:27:57.555 Removing: /var/run/dpdk/spdk_pid51556 00:27:57.555 Removing: /var/run/dpdk/spdk_pid51965 00:27:57.555 Removing: /var/run/dpdk/spdk_pid52371 00:27:57.555 Removing: /var/run/dpdk/spdk_pid52955 00:27:57.555 Removing: /var/run/dpdk/spdk_pid53359 00:27:57.555 Removing: /var/run/dpdk/spdk_pid53782 00:27:57.555 Removing: /var/run/dpdk/spdk_pid54293 00:27:57.555 Removing: /var/run/dpdk/spdk_pid5502 00:27:57.555 Removing: /var/run/dpdk/spdk_pid56676 00:27:57.555 Removing: /var/run/dpdk/spdk_pid56935 00:27:57.555 Removing: /var/run/dpdk/spdk_pid61341 00:27:57.555 Removing: /var/run/dpdk/spdk_pid61447 00:27:57.555 Removing: /var/run/dpdk/spdk_pid63130 00:27:57.555 Removing: /var/run/dpdk/spdk_pid68049 00:27:57.555 Removing: /var/run/dpdk/spdk_pid68054 
00:27:57.555 Removing: /var/run/dpdk/spdk_pid70963 00:27:57.555 Removing: /var/run/dpdk/spdk_pid72365 00:27:57.555 Removing: /var/run/dpdk/spdk_pid73774 00:27:57.555 Removing: /var/run/dpdk/spdk_pid74634 00:27:57.555 Removing: /var/run/dpdk/spdk_pid76046 00:27:57.555 Removing: /var/run/dpdk/spdk_pid76931 00:27:57.555 Removing: /var/run/dpdk/spdk_pid82339 00:27:57.555 Removing: /var/run/dpdk/spdk_pid82722 00:27:57.555 Removing: /var/run/dpdk/spdk_pid83114 00:27:57.555 Removing: /var/run/dpdk/spdk_pid84560 00:27:57.555 Removing: /var/run/dpdk/spdk_pid84958 00:27:57.555 Removing: /var/run/dpdk/spdk_pid85355 00:27:57.555 Removing: /var/run/dpdk/spdk_pid87790 00:27:57.555 Removing: /var/run/dpdk/spdk_pid87806 00:27:57.555 Removing: /var/run/dpdk/spdk_pid89317 00:27:57.555 Removing: /var/run/dpdk/spdk_pid89738 00:27:57.555 Removing: /var/run/dpdk/spdk_pid89867 00:27:57.555 Removing: /var/run/dpdk/spdk_pid9707 00:27:57.555 Removing: /var/run/dpdk/spdk_pid9709 00:27:57.555 Clean 00:27:57.813 01:21:13 -- common/autotest_common.sh@1451 -- # return 0 00:27:57.813 01:21:13 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:27:57.813 01:21:13 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:57.813 01:21:13 -- common/autotest_common.sh@10 -- # set +x 00:27:57.813 01:21:13 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:27:57.813 01:21:13 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:57.813 01:21:13 -- common/autotest_common.sh@10 -- # set +x 00:27:57.813 01:21:13 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:27:57.813 01:21:13 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:27:57.813 01:21:13 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:27:57.813 01:21:13 -- spdk/autotest.sh@391 -- # hash lcov 00:27:57.813 01:21:13 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:27:57.813 01:21:13 -- spdk/autotest.sh@393 -- # hostname 00:27:57.813 01:21:13 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-06 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:27:58.071 geninfo: WARNING: invalid characters removed from testname! 
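(The capture above and the -a/-r passes that follow are plain lcov invocations; a minimal standalone sketch of the same coverage workflow, where the paths, tracefile names, and the final genhtml step are illustrative rather than the job's actual ones:

  # Capture coverage from an instrumented build, excluding files outside the tree.
  lcov -q -c -d ./spdk --no-external -o cov_test.info
  # Merge the pre-test baseline with the test capture (both assumed to exist).
  lcov -q -a cov_base.info -a cov_test.info -o cov_total.info
  # Strip third-party and system sources, mirroring the -r passes in this log.
  lcov -q -r cov_total.info '*/dpdk/*' '/usr/*' -o cov_total.info
  # Render an HTML report from the final tracefile.
  genhtml cov_total.info -o coverage_html
)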
00:28:30.154 01:21:41 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:30.154 01:21:45 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:32.693 01:21:48 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:35.986 01:21:51 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:38.544 01:21:54 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:41.841 01:21:57 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:44.420 01:22:00 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:28:44.420 01:22:00 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:44.420 01:22:00 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:28:44.420 01:22:00 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:44.420 01:22:00 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:44.420 01:22:00 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.420 01:22:00 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.420 01:22:00 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.420 01:22:00 -- paths/export.sh@5 -- $ export PATH 00:28:44.420 01:22:00 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.420 01:22:00 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:28:44.420 01:22:00 -- common/autobuild_common.sh@444 -- $ date +%s 00:28:44.420 01:22:00 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721085720.XXXXXX 00:28:44.420 01:22:00 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721085720.MDZjF9 00:28:44.420 01:22:00 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:28:44.420 01:22:00 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:28:44.420 01:22:00 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:28:44.420 01:22:00 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:28:44.420 01:22:00 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:28:44.420 01:22:00 -- common/autobuild_common.sh@460 -- $ get_config_params 00:28:44.420 01:22:00 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:28:44.420 01:22:00 -- common/autotest_common.sh@10 -- $ set +x 00:28:44.420 01:22:00 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:28:44.420 01:22:00 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:28:44.420 01:22:00 -- pm/common@17 -- $ local monitor 00:28:44.420 01:22:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:44.420 01:22:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:44.420 01:22:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:44.420 01:22:00 -- pm/common@21 -- $ date +%s 00:28:44.420 01:22:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:44.420 01:22:00 -- pm/common@21 -- $ date +%s 00:28:44.420 
01:22:00 -- pm/common@25 -- $ sleep 1 00:28:44.420 01:22:00 -- pm/common@21 -- $ date +%s 00:28:44.420 01:22:00 -- pm/common@21 -- $ date +%s 00:28:44.420 01:22:00 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721085720 00:28:44.420 01:22:00 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721085720 00:28:44.420 01:22:00 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721085720 00:28:44.420 01:22:00 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721085720 00:28:44.420 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721085720_collect-vmstat.pm.log 00:28:44.420 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721085720_collect-cpu-load.pm.log 00:28:44.420 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721085720_collect-cpu-temp.pm.log 00:28:44.420 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721085720_collect-bmc-pm.bmc.pm.log 00:28:45.358 01:22:01 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:28:45.358 01:22:01 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:28:45.358 01:22:01 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:28:45.358 01:22:01 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:28:45.358 01:22:01 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:28:45.358 01:22:01 -- spdk/autopackage.sh@19 -- $ timing_finish 00:28:45.358 01:22:01 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:28:45.358 01:22:01 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:28:45.358 01:22:01 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:28:45.358 01:22:01 -- spdk/autopackage.sh@20 -- $ exit 0 00:28:45.358 01:22:01 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:28:45.358 01:22:01 -- pm/common@29 -- $ signal_monitor_resources TERM 00:28:45.358 01:22:01 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:28:45.358 01:22:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:45.358 01:22:01 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:28:45.358 01:22:01 -- pm/common@44 -- $ pid=99983 00:28:45.358 01:22:01 -- pm/common@50 -- $ kill -TERM 99983 00:28:45.358 01:22:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:45.358 01:22:01 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:28:45.358 01:22:01 -- pm/common@44 -- $ pid=99985 00:28:45.358 01:22:01 -- pm/common@50 -- $ kill -TERM 
99985 00:28:45.358 01:22:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:45.358 01:22:01 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:28:45.358 01:22:01 -- pm/common@44 -- $ pid=99987 00:28:45.358 01:22:01 -- pm/common@50 -- $ kill -TERM 99987 00:28:45.358 01:22:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:45.358 01:22:01 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:28:45.358 01:22:01 -- pm/common@44 -- $ pid=100017 00:28:45.358 01:22:01 -- pm/common@50 -- $ sudo -E kill -TERM 100017 00:28:45.358 + [[ -n 3940623 ]] 00:28:45.358 + sudo kill 3940623 00:28:45.368 [Pipeline] } 00:28:45.384 [Pipeline] // stage 00:28:45.389 [Pipeline] } 00:28:45.403 [Pipeline] // timeout 00:28:45.408 [Pipeline] } 00:28:45.421 [Pipeline] // catchError 00:28:45.425 [Pipeline] } 00:28:45.437 [Pipeline] // wrap 00:28:45.442 [Pipeline] } 00:28:45.455 [Pipeline] // catchError 00:28:45.464 [Pipeline] stage 00:28:45.466 [Pipeline] { (Epilogue) 00:28:45.481 [Pipeline] catchError 00:28:45.483 [Pipeline] { 00:28:45.497 [Pipeline] echo 00:28:45.499 Cleanup processes 00:28:45.504 [Pipeline] sh 00:28:45.781 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:28:45.781 100131 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:28:45.781 100247 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:28:45.795 [Pipeline] sh 00:28:46.076 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:28:46.076 ++ grep -v 'sudo pgrep' 00:28:46.076 ++ awk '{print $1}' 00:28:46.076 + sudo kill -9 100131 00:28:46.087 [Pipeline] sh 00:28:46.369 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:28:54.487 [Pipeline] sh 00:28:54.770 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:28:54.770 Artifacts sizes are good 00:28:54.783 [Pipeline] archiveArtifacts 00:28:54.789 Archiving artifacts 00:28:55.018 [Pipeline] sh 00:28:55.298 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:28:55.312 [Pipeline] cleanWs 00:28:55.321 [WS-CLEANUP] Deleting project workspace... 00:28:55.321 [WS-CLEANUP] Deferred wipeout is used... 00:28:55.327 [WS-CLEANUP] done 00:28:55.330 [Pipeline] } 00:28:55.348 [Pipeline] // catchError 00:28:55.356 [Pipeline] sh 00:28:55.631 + logger -p user.info -t JENKINS-CI 00:28:55.639 [Pipeline] } 00:28:55.653 [Pipeline] // stage 00:28:55.657 [Pipeline] } 00:28:55.669 [Pipeline] // node 00:28:55.672 [Pipeline] End of Pipeline 00:28:55.701 Finished: SUCCESS
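(For reference, the keyring_linux test near the top of this section drives the kernel session keyring with plain keyctl calls; a minimal sketch of that sequence, reusing the key name and the PSK value printed in the log — the surrounding shell is illustrative, not the test script itself:

  # Seed an NVMe TLS PSK into the session keyring under the name the test uses.
  psk='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'
  sn=$(keyctl add user :spdk-test:key0 "$psk" @s)   # keyctl prints the new key's serial number
  keyctl search @s user :spdk-test:key0             # resolve the serial, as keyring/linux.sh@16 does
  keyctl print "$sn"                                # dump the payload for verification
  keyctl unlink "$sn" @s                            # teardown, matching keyring/linux.sh@34
)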